Sep 6 00:59:12.562420 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Sep 6 00:59:12.562434 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025
Sep 6 00:59:12.562441 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:59:12.562445 kernel: BIOS-provided physical RAM map:
Sep 6 00:59:12.562449 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 6 00:59:12.562452 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 6 00:59:12.562457 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 6 00:59:12.562462 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 6 00:59:12.562466 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 6 00:59:12.562470 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbdfff] usable
Sep 6 00:59:12.562473 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x000000006dfbefff] ACPI NVS
Sep 6 00:59:12.562477 kernel: BIOS-e820: [mem 0x000000006dfbf000-0x000000006dfbffff] reserved
Sep 6 00:59:12.562481 kernel: BIOS-e820: [mem 0x000000006dfc0000-0x0000000077fc6fff] usable
Sep 6 00:59:12.562485 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved
Sep 6 00:59:12.562491 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
Sep 6 00:59:12.562495 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS
Sep 6 00:59:12.562499 kernel: BIOS-e820: [mem 0x0000000079665000-0x000000007befefff] reserved
Sep 6 00:59:12.562503 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Sep 6 00:59:12.562508 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Sep 6 00:59:12.562512 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 6 00:59:12.562516 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 6 00:59:12.562520 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 6 00:59:12.562524 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 6 00:59:12.562529 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 6 00:59:12.562533 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Sep 6 00:59:12.562537 kernel: NX (Execute Disable) protection: active
Sep 6 00:59:12.562541 kernel: SMBIOS 3.2.1 present.
Sep 6 00:59:12.562546 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Sep 6 00:59:12.562550 kernel: tsc: Detected 3400.000 MHz processor
Sep 6 00:59:12.562554 kernel: tsc: Detected 3399.906 MHz TSC
Sep 6 00:59:12.562558 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 6 00:59:12.562563 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 6 00:59:12.562568 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Sep 6 00:59:12.562573 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 6 00:59:12.562578 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Sep 6 00:59:12.562582 kernel: Using GB pages for direct mapping
Sep 6 00:59:12.562586 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:59:12.562591 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 6 00:59:12.562595 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 6 00:59:12.562599 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013)
Sep 6 00:59:12.562606 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 6 00:59:12.562611 kernel: ACPI: FACS 0x0000000079664F80 000040
Sep 6 00:59:12.562616 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013)
Sep 6 00:59:12.562621 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013)
Sep 6 00:59:12.562625 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 6 00:59:12.562630 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 6 00:59:12.562634 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 6 00:59:12.562640 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 6 00:59:12.562645 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 6 00:59:12.562649 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 6 00:59:12.562654 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 6 00:59:12.562659 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 6 00:59:12.562663 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 6 00:59:12.562668 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 6 00:59:12.562673 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 6 00:59:12.562677 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 6 00:59:12.562683 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 6 00:59:12.562687 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 6 00:59:12.562692 kernel: ACPI: DBG2 0x000000007958F0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 6 00:59:12.562697 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 6 00:59:12.562701 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Sep 6 00:59:12.562706 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 6 00:59:12.562711 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 6 00:59:12.562715 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 6 00:59:12.562721 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf5m 01072009 AMI 00010013)
Sep 6 00:59:12.562726 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 6 00:59:12.562731 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 6 00:59:12.562735 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 6 00:59:12.562740 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 6 00:59:12.562745 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 6 00:59:12.562749 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733]
Sep 6 00:59:12.562754 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e]
Sep 6 00:59:12.562759 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf]
Sep 6 00:59:12.562764 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863]
Sep 6 00:59:12.562769 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab]
Sep 6 00:59:12.562773 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b]
Sep 6 00:59:12.562778 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b]
Sep 6 00:59:12.562783 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0]
Sep 6 00:59:12.562787 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3]
Sep 6 00:59:12.562792 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd]
Sep 6 00:59:12.562796 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea]
Sep 6 00:59:12.562801 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27]
Sep 6 00:59:12.562806 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5]
Sep 6 00:59:12.562811 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce]
Sep 6 00:59:12.562816 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311]
Sep 6 00:59:12.562820 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab]
Sep 6 00:59:12.562825 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d]
Sep 6 00:59:12.562830 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071]
Sep 6 00:59:12.562834 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab]
Sep 6 00:59:12.562839 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958f0b0-0x7958f103]
Sep 6 00:59:12.562844 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e]
Sep 6 00:59:12.562849 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17]
Sep 6 00:59:12.562854 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b]
Sep 6 00:59:12.562858 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93]
Sep 6 00:59:12.562863 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26]
Sep 6 00:59:12.562868 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f]
Sep 6 00:59:12.562872 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f]
Sep 6 00:59:12.562877 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf]
Sep 6 00:59:12.562881 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf]
Sep 6 00:59:12.562886 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b]
Sep 6 00:59:12.562891 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1]
Sep 6 00:59:12.562896 kernel: No NUMA configuration found
Sep 6 00:59:12.562901 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Sep 6 00:59:12.562905 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Sep 6 00:59:12.562910 kernel: Zone ranges:
Sep 6 00:59:12.562915 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 6 00:59:12.562920 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 6 00:59:12.562924 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Sep 6 00:59:12.562929 kernel: Movable zone start for each node
Sep 6 00:59:12.562935 kernel: Early memory node ranges
Sep 6 00:59:12.562939 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 6 00:59:12.562944 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 6 00:59:12.562949 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbdfff]
Sep 6 00:59:12.562953 kernel: node 0: [mem 0x000000006dfc0000-0x0000000077fc6fff]
Sep 6 00:59:12.562958 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff]
Sep 6 00:59:12.562963 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Sep 6 00:59:12.562967 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Sep 6 00:59:12.562972 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Sep 6 00:59:12.562980 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 6 00:59:12.562986 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 6 00:59:12.562991 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 6 00:59:12.562996 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 6 00:59:12.563001 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Sep 6 00:59:12.563006 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Sep 6 00:59:12.563012 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Sep 6 00:59:12.563017 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Sep 6 00:59:12.563022 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 6 00:59:12.563027 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 6 00:59:12.563033 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 6 00:59:12.563038 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 6 00:59:12.563042 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 6 00:59:12.563047 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 6 00:59:12.563052 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 6 00:59:12.563057 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 6 00:59:12.563062 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 6 00:59:12.563068 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 6 00:59:12.563073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 6 00:59:12.563078 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 6 00:59:12.563083 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 6 00:59:12.563088 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 6 00:59:12.563093 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 6 00:59:12.563098 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 6 00:59:12.563103 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 6 00:59:12.563108 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 6 00:59:12.563113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 6 00:59:12.563118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 6 00:59:12.563123 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 6 00:59:12.563128 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 6 00:59:12.563133 kernel: TSC deadline timer available
Sep 6 00:59:12.563138 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 6 00:59:12.563144 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Sep 6 00:59:12.563149 kernel: Booting paravirtualized kernel on bare hardware
Sep 6 00:59:12.563154 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 6 00:59:12.563159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Sep 6 00:59:12.563165 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 6 00:59:12.563169 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 6 00:59:12.563174 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 6 00:59:12.563179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222329
Sep 6 00:59:12.563184 kernel: Policy zone: Normal
Sep 6 00:59:12.563190 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:59:12.563195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:59:12.563201 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 6 00:59:12.563206 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 6 00:59:12.563211 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:59:12.563216 kernel: Memory: 32681620K/33411996K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 730116K reserved, 0K cma-reserved)
Sep 6 00:59:12.563221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 6 00:59:12.563226 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 6 00:59:12.563231 kernel: ftrace: allocated 136 pages with 2 groups
Sep 6 00:59:12.563236 kernel: rcu: Hierarchical RCU implementation.
Sep 6 00:59:12.563241 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:59:12.563247 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 6 00:59:12.563252 kernel: Rude variant of Tasks RCU enabled.
Sep 6 00:59:12.563258 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:59:12.563263 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:59:12.563268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 6 00:59:12.563273 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 6 00:59:12.563278 kernel: random: crng init done
Sep 6 00:59:12.563282 kernel: Console: colour dummy device 80x25
Sep 6 00:59:12.563287 kernel: printk: console [tty0] enabled
Sep 6 00:59:12.563293 kernel: printk: console [ttyS1] enabled
Sep 6 00:59:12.563298 kernel: ACPI: Core revision 20210730
Sep 6 00:59:12.563303 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 6 00:59:12.563308 kernel: APIC: Switch to symmetric I/O mode setup
Sep 6 00:59:12.563313 kernel: DMAR: Host address width 39
Sep 6 00:59:12.563318 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Sep 6 00:59:12.563323 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Sep 6 00:59:12.563328 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 6 00:59:12.563333 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 6 00:59:12.563339 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Sep 6 00:59:12.563344 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Sep 6 00:59:12.563349 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Sep 6 00:59:12.563354 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 6 00:59:12.563359 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 6 00:59:12.563364 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 6 00:59:12.563369 kernel: x2apic enabled
Sep 6 00:59:12.563374 kernel: Switched APIC routing to cluster x2apic.
Sep 6 00:59:12.563379 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 6 00:59:12.563385 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 6 00:59:12.563390 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 6 00:59:12.563395 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 6 00:59:12.563400 kernel: process: using mwait in idle threads
Sep 6 00:59:12.563405 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 6 00:59:12.563410 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 6 00:59:12.563415 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 6 00:59:12.563423 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 6 00:59:12.563428 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 6 00:59:12.563434 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 6 00:59:12.563439 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 6 00:59:12.563444 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 6 00:59:12.563449 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 6 00:59:12.563454 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 6 00:59:12.563459 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 6 00:59:12.563464 kernel: TAA: Mitigation: TSX disabled
Sep 6 00:59:12.563469 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 6 00:59:12.563474 kernel: SRBDS: Mitigation: Microcode
Sep 6 00:59:12.563480 kernel: GDS: Vulnerable: No microcode
Sep 6 00:59:12.563485 kernel: active return thunk: its_return_thunk
Sep 6 00:59:12.563490 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 6 00:59:12.563495 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 6 00:59:12.563500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 6 00:59:12.563505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 6 00:59:12.563510 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 6 00:59:12.563515 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 6 00:59:12.563520 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 6 00:59:12.563526 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 6 00:59:12.563530 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 6 00:59:12.563536 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 6 00:59:12.563541 kernel: Freeing SMP alternatives memory: 32K
Sep 6 00:59:12.563545 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:59:12.563550 kernel: LSM: Security Framework initializing
Sep 6 00:59:12.563555 kernel: SELinux: Initializing.
Sep 6 00:59:12.563561 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:59:12.563566 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:59:12.563572 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 6 00:59:12.563577 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 6 00:59:12.563582 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 6 00:59:12.563587 kernel: ... version: 4
Sep 6 00:59:12.563592 kernel: ... bit width: 48
Sep 6 00:59:12.563597 kernel: ... generic registers: 4
Sep 6 00:59:12.563602 kernel: ... value mask: 0000ffffffffffff
Sep 6 00:59:12.563607 kernel: ... max period: 00007fffffffffff
Sep 6 00:59:12.563612 kernel: ... fixed-purpose events: 3
Sep 6 00:59:12.563618 kernel: ... event mask: 000000070000000f
Sep 6 00:59:12.563623 kernel: signal: max sigframe size: 2032
Sep 6 00:59:12.563628 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:59:12.563633 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 6 00:59:12.563638 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:59:12.563643 kernel: x86: Booting SMP configuration:
Sep 6 00:59:12.563648 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Sep 6 00:59:12.563653 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 6 00:59:12.563659 kernel: #9 #10 #11 #12 #13 #14 #15
Sep 6 00:59:12.563664 kernel: smp: Brought up 1 node, 16 CPUs
Sep 6 00:59:12.563669 kernel: smpboot: Max logical packages: 1
Sep 6 00:59:12.563674 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 6 00:59:12.563679 kernel: devtmpfs: initialized
Sep 6 00:59:12.563684 kernel: x86/mm: Memory block size: 128MB
Sep 6 00:59:12.563689 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbe000-0x6dfbefff] (4096 bytes)
Sep 6 00:59:12.563694 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes)
Sep 6 00:59:12.563699 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:59:12.563705 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 6 00:59:12.563710 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:59:12.563715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:59:12.563720 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:59:12.563725 kernel: audit: type=2000 audit(1757120346.132:1): state=initialized audit_enabled=0 res=1
Sep 6 00:59:12.563730 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:59:12.563735 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 6 00:59:12.563740 kernel: cpuidle: using governor menu
Sep 6 00:59:12.563745 kernel: ACPI: bus type PCI registered
Sep 6 00:59:12.563751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:59:12.563756 kernel: dca service started, version 1.12.1
Sep 6 00:59:12.563761 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 6 00:59:12.563766 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Sep 6 00:59:12.563771 kernel: PCI: Using configuration type 1 for base access
Sep 6 00:59:12.563776 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 6 00:59:12.563781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 6 00:59:12.563786 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:59:12.563791 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:59:12.563797 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:59:12.563802 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:59:12.563807 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:59:12.563812 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:59:12.563817 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:59:12.563822 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:59:12.563827 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 6 00:59:12.563832 kernel: ACPI: Dynamic OEM Table Load:
Sep 6 00:59:12.563837 kernel: ACPI: SSDT 0xFFFF9511C021DE00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 6 00:59:12.563843 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Sep 6 00:59:12.563848 kernel: ACPI: Dynamic OEM Table Load:
Sep 6 00:59:12.563853 kernel: ACPI: SSDT 0xFFFF9511C1C5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 6 00:59:12.563858 kernel: ACPI: Dynamic OEM Table Load:
Sep 6 00:59:12.563862 kernel: ACPI: SSDT 0xFFFF9511C1D4C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 6 00:59:12.563867 kernel: ACPI: Dynamic OEM Table Load:
Sep 6 00:59:12.563872 kernel: ACPI: SSDT 0xFFFF9511C014A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 6 00:59:12.563877 kernel: ACPI: Interpreter enabled
Sep 6 00:59:12.563882 kernel: ACPI: PM: (supports S0 S5)
Sep 6 00:59:12.563887 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 6 00:59:12.563893 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 6 00:59:12.563898 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 6 00:59:12.563903 kernel: HEST: Table parsing has been initialized.
Sep 6 00:59:12.563908 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 6 00:59:12.563913 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 6 00:59:12.563918 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 6 00:59:12.563923 kernel: ACPI: PM: Power Resource [USBC]
Sep 6 00:59:12.563928 kernel: ACPI: PM: Power Resource [V0PR]
Sep 6 00:59:12.563933 kernel: ACPI: PM: Power Resource [V1PR]
Sep 6 00:59:12.563939 kernel: ACPI: PM: Power Resource [V2PR]
Sep 6 00:59:12.563943 kernel: ACPI: PM: Power Resource [WRST]
Sep 6 00:59:12.563948 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 6 00:59:12.563953 kernel: ACPI: PM: Power Resource [FN00]
Sep 6 00:59:12.563958 kernel: ACPI: PM: Power Resource [FN01]
Sep 6 00:59:12.563963 kernel: ACPI: PM: Power Resource [FN02]
Sep 6 00:59:12.563968 kernel: ACPI: PM: Power Resource [FN03]
Sep 6 00:59:12.563973 kernel: ACPI: PM: Power Resource [FN04]
Sep 6 00:59:12.563978 kernel: ACPI: PM: Power Resource [PIN]
Sep 6 00:59:12.563984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 6 00:59:12.564052 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:59:12.564100 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 6 00:59:12.564144 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 6 00:59:12.564152 kernel: PCI host bridge to bus 0000:00
Sep 6 00:59:12.564196 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 6 00:59:12.564237 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 6 00:59:12.564279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 6 00:59:12.564317 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Sep 6 00:59:12.564356 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 6 00:59:12.564394 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 6 00:59:12.564449 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 6 00:59:12.564503 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 6 00:59:12.564551 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.564601 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 6 00:59:12.564646 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.564695 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Sep 6 00:59:12.564740 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Sep 6 00:59:12.564785 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Sep 6 00:59:12.564831 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Sep 6 00:59:12.564883 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 6 00:59:12.564929 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Sep 6 00:59:12.564976 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 6 00:59:12.565022 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Sep 6 00:59:12.565070 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 6 00:59:12.565115 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Sep 6 00:59:12.565161 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 6 00:59:12.565213 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 6 00:59:12.565256 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Sep 6 00:59:12.565300 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Sep 6 00:59:12.565349 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 6 00:59:12.565392 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 6 00:59:12.565445 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 6 00:59:12.565490 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 6 00:59:12.565537 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 6 00:59:12.565582 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Sep 6 00:59:12.565625 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 6 00:59:12.565674 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 6 00:59:12.565725 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Sep 6 00:59:12.565771 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 6 00:59:12.565818 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 6 00:59:12.565865 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Sep 6 00:59:12.565908 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 6 00:59:12.565957 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 6 00:59:12.566001 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Sep 6 00:59:12.566048 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Sep 6 00:59:12.566092 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Sep 6 00:59:12.566135 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Sep 6 00:59:12.566179 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Sep 6 00:59:12.566223 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Sep 6 00:59:12.566267 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 6 00:59:12.566318 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 6 00:59:12.566366 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.566416 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 6 00:59:12.566465 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.566516 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 6 00:59:12.566561 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.566611 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 6 00:59:12.566655 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.566704 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 6 00:59:12.566749 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.566797 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 6 00:59:12.566844 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 6 00:59:12.566895 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 6 00:59:12.566944 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 6 00:59:12.566988 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Sep 6 00:59:12.567032 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 6 00:59:12.567079 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 6 00:59:12.567125 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 6 00:59:12.567171 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 6 00:59:12.567221 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 6 00:59:12.567268 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 6 00:59:12.567313 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Sep 6 00:59:12.567360 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 6 00:59:12.567406 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 6 00:59:12.567458 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 6 00:59:12.567509 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 6 00:59:12.567557 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 6 00:59:12.567603 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Sep 6 00:59:12.567669 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 6 00:59:12.567714 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 6 00:59:12.567759 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 6 00:59:12.567806 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 6 00:59:12.567850 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Sep 6 00:59:12.567895 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 6 00:59:12.567938 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 6 00:59:12.567988 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 6 00:59:12.568037 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 6 00:59:12.568132 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Sep 6 00:59:12.568178 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 6 00:59:12.568226 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Sep 6 00:59:12.568270 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 6 00:59:12.568315 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 6 00:59:12.568358 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 6 00:59:12.568402 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Sep 6
00:59:12.568492 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Sep 6 00:59:12.568537 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Sep 6 00:59:12.568585 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Sep 6 00:59:12.568630 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Sep 6 00:59:12.568675 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Sep 6 00:59:12.568720 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Sep 6 00:59:12.568765 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 6 00:59:12.568808 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 6 00:59:12.568852 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 6 00:59:12.568897 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 6 00:59:12.568946 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Sep 6 00:59:12.568992 kernel: pci 0000:07:00.0: enabling Extended Tags Sep 6 00:59:12.569036 kernel: pci 0000:07:00.0: supports D1 D2 Sep 6 00:59:12.569082 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 6 00:59:12.569126 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 6 00:59:12.569171 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 6 00:59:12.569214 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 6 00:59:12.569267 kernel: pci_bus 0000:08: extended config space not accessible Sep 6 00:59:12.569318 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Sep 6 00:59:12.569367 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Sep 6 00:59:12.569414 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Sep 6 00:59:12.569501 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Sep 6 00:59:12.569550 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:59:12.569597 kernel: pci 0000:08:00.0: supports D1 D2 Sep 6 00:59:12.569647 kernel: pci 0000:08:00.0: 
PME# supported from D0 D1 D2 D3hot D3cold Sep 6 00:59:12.569693 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 6 00:59:12.569738 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 6 00:59:12.569784 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 6 00:59:12.569792 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 6 00:59:12.569798 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 6 00:59:12.569803 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 6 00:59:12.569808 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 6 00:59:12.569815 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 6 00:59:12.569820 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 6 00:59:12.569825 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 6 00:59:12.569830 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 6 00:59:12.569836 kernel: iommu: Default domain type: Translated Sep 6 00:59:12.569841 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:59:12.569887 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Sep 6 00:59:12.569935 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:59:12.569982 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Sep 6 00:59:12.569992 kernel: vgaarb: loaded Sep 6 00:59:12.569997 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:59:12.570002 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:59:12.570008 kernel: PTP clock support registered Sep 6 00:59:12.570013 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:59:12.570018 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:59:12.570023 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 6 00:59:12.570029 kernel: e820: reserve RAM buffer [mem 0x6dfbe000-0x6fffffff] Sep 6 00:59:12.570034 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff] Sep 6 00:59:12.570040 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff] Sep 6 00:59:12.570045 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Sep 6 00:59:12.570050 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Sep 6 00:59:12.570055 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 6 00:59:12.570060 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Sep 6 00:59:12.570065 kernel: clocksource: Switched to clocksource tsc-early Sep 6 00:59:12.570071 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:59:12.570076 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:59:12.570082 kernel: pnp: PnP ACPI init Sep 6 00:59:12.570128 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 6 00:59:12.570175 kernel: pnp 00:02: [dma 0 disabled] Sep 6 00:59:12.570218 kernel: pnp 00:03: [dma 0 disabled] Sep 6 00:59:12.570264 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 6 00:59:12.570304 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 6 00:59:12.570347 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 6 00:59:12.570392 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 6 00:59:12.570458 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 6 00:59:12.570513 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 6 00:59:12.570552 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 6 
00:59:12.570592 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 6 00:59:12.570631 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 6 00:59:12.570671 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 6 00:59:12.570712 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 6 00:59:12.570755 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 6 00:59:12.570796 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 6 00:59:12.570836 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 6 00:59:12.570875 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 6 00:59:12.570914 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 6 00:59:12.570956 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 6 00:59:12.570996 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 6 00:59:12.571039 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 6 00:59:12.571047 kernel: pnp: PnP ACPI: found 10 devices Sep 6 00:59:12.571052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:59:12.571058 kernel: NET: Registered PF_INET protocol family Sep 6 00:59:12.571063 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:59:12.571068 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 00:59:12.571075 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:59:12.571081 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:59:12.571086 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 6 00:59:12.571091 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 6 00:59:12.571097 kernel: UDP hash table entries: 16384 
(order: 7, 524288 bytes, linear) Sep 6 00:59:12.571102 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 6 00:59:12.571107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:59:12.571112 kernel: NET: Registered PF_XDP protocol family Sep 6 00:59:12.571156 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Sep 6 00:59:12.571204 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Sep 6 00:59:12.571248 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Sep 6 00:59:12.571292 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 6 00:59:12.571340 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 6 00:59:12.571387 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 6 00:59:12.571457 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 6 00:59:12.571504 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 6 00:59:12.571550 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Sep 6 00:59:12.571596 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Sep 6 00:59:12.571641 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 6 00:59:12.571686 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Sep 6 00:59:12.571731 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Sep 6 00:59:12.571779 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 6 00:59:12.571824 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Sep 6 00:59:12.571870 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 6 00:59:12.571914 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 6 00:59:12.571959 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 6 00:59:12.572003 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 6 00:59:12.572050 kernel: pci 0000:07:00.0: PCI 
bridge to [bus 08] Sep 6 00:59:12.572097 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 6 00:59:12.572144 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 6 00:59:12.572191 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 6 00:59:12.572236 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 6 00:59:12.572280 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 6 00:59:12.572322 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 6 00:59:12.572362 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:59:12.572404 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:59:12.572447 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:59:12.572486 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Sep 6 00:59:12.572525 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 6 00:59:12.572576 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Sep 6 00:59:12.572619 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 6 00:59:12.572684 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Sep 6 00:59:12.572725 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Sep 6 00:59:12.572771 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 6 00:59:12.572812 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Sep 6 00:59:12.572859 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 6 00:59:12.572900 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Sep 6 00:59:12.572943 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Sep 6 00:59:12.572986 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Sep 6 00:59:12.572994 kernel: PCI: CLS 64 bytes, default 64 Sep 6 00:59:12.573000 kernel: DMAR: No ATSR found Sep 6 00:59:12.573005 kernel: DMAR: No SATC found Sep 
6 00:59:12.573011 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Sep 6 00:59:12.573017 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Sep 6 00:59:12.573022 kernel: DMAR: IOMMU feature nwfs inconsistent Sep 6 00:59:12.573027 kernel: DMAR: IOMMU feature pasid inconsistent Sep 6 00:59:12.573032 kernel: DMAR: IOMMU feature eafs inconsistent Sep 6 00:59:12.573038 kernel: DMAR: IOMMU feature prs inconsistent Sep 6 00:59:12.573043 kernel: DMAR: IOMMU feature nest inconsistent Sep 6 00:59:12.573048 kernel: DMAR: IOMMU feature mts inconsistent Sep 6 00:59:12.573054 kernel: DMAR: IOMMU feature sc_support inconsistent Sep 6 00:59:12.573060 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Sep 6 00:59:12.573065 kernel: DMAR: dmar0: Using Queued invalidation Sep 6 00:59:12.573070 kernel: DMAR: dmar1: Using Queued invalidation Sep 6 00:59:12.573114 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 6 00:59:12.573159 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 6 00:59:12.573204 kernel: pci 0000:00:01.1: Adding to iommu group 1 Sep 6 00:59:12.573249 kernel: pci 0000:00:02.0: Adding to iommu group 2 Sep 6 00:59:12.573293 kernel: pci 0000:00:08.0: Adding to iommu group 3 Sep 6 00:59:12.573337 kernel: pci 0000:00:12.0: Adding to iommu group 4 Sep 6 00:59:12.573383 kernel: pci 0000:00:14.0: Adding to iommu group 5 Sep 6 00:59:12.573429 kernel: pci 0000:00:14.2: Adding to iommu group 5 Sep 6 00:59:12.573515 kernel: pci 0000:00:15.0: Adding to iommu group 6 Sep 6 00:59:12.573559 kernel: pci 0000:00:15.1: Adding to iommu group 6 Sep 6 00:59:12.573602 kernel: pci 0000:00:16.0: Adding to iommu group 7 Sep 6 00:59:12.573645 kernel: pci 0000:00:16.1: Adding to iommu group 7 Sep 6 00:59:12.573689 kernel: pci 0000:00:16.4: Adding to iommu group 7 Sep 6 00:59:12.573733 kernel: pci 0000:00:17.0: Adding to iommu group 8 Sep 6 00:59:12.573778 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Sep 6 00:59:12.573822 kernel: pci 0000:00:1b.4: Adding to iommu group 10 
Sep 6 00:59:12.573865 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Sep 6 00:59:12.573910 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Sep 6 00:59:12.573953 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Sep 6 00:59:12.573996 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Sep 6 00:59:12.574040 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Sep 6 00:59:12.574083 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Sep 6 00:59:12.574128 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Sep 6 00:59:12.574173 kernel: pci 0000:02:00.0: Adding to iommu group 1
Sep 6 00:59:12.574219 kernel: pci 0000:02:00.1: Adding to iommu group 1
Sep 6 00:59:12.574265 kernel: pci 0000:04:00.0: Adding to iommu group 16
Sep 6 00:59:12.574311 kernel: pci 0000:05:00.0: Adding to iommu group 17
Sep 6 00:59:12.574357 kernel: pci 0000:07:00.0: Adding to iommu group 18
Sep 6 00:59:12.574405 kernel: pci 0000:08:00.0: Adding to iommu group 18
Sep 6 00:59:12.574413 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Sep 6 00:59:12.574440 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 6 00:59:12.574446 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB)
Sep 6 00:59:12.574452 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Sep 6 00:59:12.574476 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Sep 6 00:59:12.574481 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Sep 6 00:59:12.574487 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Sep 6 00:59:12.574492 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Sep 6 00:59:12.574540 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Sep 6 00:59:12.574549 kernel: Initialise system trusted keyrings
Sep 6 00:59:12.574556 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Sep 6 00:59:12.574561 kernel: Key type asymmetric registered
Sep 6 00:59:12.574566 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:59:12.574572 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:59:12.574577 kernel: io scheduler mq-deadline registered
Sep 6 00:59:12.574582 kernel: io scheduler kyber registered
Sep 6 00:59:12.574587 kernel: io scheduler bfq registered
Sep 6 00:59:12.574632 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Sep 6 00:59:12.574678 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Sep 6 00:59:12.574722 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Sep 6 00:59:12.574767 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Sep 6 00:59:12.574811 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Sep 6 00:59:12.574855 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Sep 6 00:59:12.574899 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Sep 6 00:59:12.574948 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Sep 6 00:59:12.574957 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Sep 6 00:59:12.574963 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Sep 6 00:59:12.574968 kernel: pstore: Registered erst as persistent store backend
Sep 6 00:59:12.574973 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 6 00:59:12.574979 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:59:12.574984 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 6 00:59:12.574990 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 6 00:59:12.575034 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Sep 6 00:59:12.575042 kernel: i8042: PNP: No PS/2 controller found.
Sep 6 00:59:12.575084 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Sep 6 00:59:12.575125 kernel: rtc_cmos rtc_cmos: registered as rtc0
Sep 6 00:59:12.575165 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-06T00:59:11 UTC (1757120351)
Sep 6 00:59:12.575206 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Sep 6 00:59:12.575214 kernel: intel_pstate: Intel P-state driver initializing
Sep 6 00:59:12.575219 kernel: intel_pstate: Disabling energy efficiency optimization
Sep 6 00:59:12.575225 kernel: intel_pstate: HWP enabled
Sep 6 00:59:12.575230 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Sep 6 00:59:12.575237 kernel: vesafb: scrolling: redraw
Sep 6 00:59:12.575242 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Sep 6 00:59:12.575247 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x0000000080c05ab6, using 768k, total 768k
Sep 6 00:59:12.575253 kernel: Console: switching to colour frame buffer device 128x48
Sep 6 00:59:12.575258 kernel: fb0: VESA VGA frame buffer device
Sep 6 00:59:12.575263 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:59:12.575268 kernel: Segment Routing with IPv6
Sep 6 00:59:12.575274 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:59:12.575279 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:59:12.575285 kernel: Key type dns_resolver registered
Sep 6 00:59:12.575290 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Sep 6 00:59:12.575295 kernel: microcode: Microcode Update Driver: v2.2.
Sep 6 00:59:12.575301 kernel: IPI shorthand broadcast: enabled
Sep 6 00:59:12.575306 kernel: sched_clock: Marking stable (1855933729, 1360111684)->(4640634081, -1424588668)
Sep 6 00:59:12.575311 kernel: registered taskstats version 1
Sep 6 00:59:12.575317 kernel: Loading compiled-in X.509 certificates
Sep 6 00:59:12.575322 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb'
Sep 6 00:59:12.575327 kernel: Key type .fscrypt registered
Sep 6 00:59:12.575333 kernel: Key type fscrypt-provisioning registered
Sep 6 00:59:12.575338 kernel: pstore: Using crash dump compression: deflate
Sep 6 00:59:12.575344 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:59:12.575349 kernel: ima: No architecture policies found
Sep 6 00:59:12.575354 kernel: clk: Disabling unused clocks
Sep 6 00:59:12.575359 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 6 00:59:12.575365 kernel: Write protecting the kernel read-only data: 28672k
Sep 6 00:59:12.575370 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 6 00:59:12.575375 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 6 00:59:12.575382 kernel: Run /init as init process
Sep 6 00:59:12.575387 kernel: with arguments:
Sep 6 00:59:12.575392 kernel: /init
Sep 6 00:59:12.575397 kernel: with environment:
Sep 6 00:59:12.575403 kernel: HOME=/
Sep 6 00:59:12.575408 kernel: TERM=linux
Sep 6 00:59:12.575413 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:59:12.575422 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:59:12.575451 systemd[1]: Detected architecture x86-64.
Sep 6 00:59:12.575470 systemd[1]: Running in initrd.
Sep 6 00:59:12.575475 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:59:12.575481 systemd[1]: Hostname set to .
Sep 6 00:59:12.575486 systemd[1]: Initializing machine ID from random generator.
Sep 6 00:59:12.575492 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:59:12.575497 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:59:12.575503 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:59:12.575509 systemd[1]: Reached target paths.target.
Sep 6 00:59:12.575514 systemd[1]: Reached target slices.target.
Sep 6 00:59:12.575519 systemd[1]: Reached target swap.target.
Sep 6 00:59:12.575525 systemd[1]: Reached target timers.target.
Sep 6 00:59:12.575530 systemd[1]: Listening on iscsid.socket.
Sep 6 00:59:12.575535 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:59:12.575541 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:59:12.575547 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:59:12.575553 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:59:12.575558 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:59:12.575564 kernel: tsc: Refined TSC clocksource calibration: 3408.012 MHz
Sep 6 00:59:12.575569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fdefef86, max_idle_ns: 440795247444 ns
Sep 6 00:59:12.575575 kernel: clocksource: Switched to clocksource tsc
Sep 6 00:59:12.575580 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:59:12.575585 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:59:12.575591 systemd[1]: Reached target sockets.target.
Sep 6 00:59:12.575597 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:59:12.575603 systemd[1]: Finished network-cleanup.service.
Sep 6 00:59:12.575608 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:59:12.575613 systemd[1]: Starting systemd-journald.service...
Sep 6 00:59:12.575619 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:59:12.575628 systemd-journald[269]: Journal started
Sep 6 00:59:12.575655 systemd-journald[269]: Runtime Journal (/run/log/journal/3180421188b14412a8b62da23b87ecd5) is 8.0M, max 639.3M, 631.3M free.
Sep 6 00:59:12.577630 systemd-modules-load[270]: Inserted module 'overlay'
Sep 6 00:59:12.582000 audit: BPF prog-id=6 op=LOAD
Sep 6 00:59:12.600465 kernel: audit: type=1334 audit(1757120352.582:2): prog-id=6 op=LOAD
Sep 6 00:59:12.600480 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:59:12.650472 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:59:12.650519 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 00:59:12.683464 kernel: Bridge firewalling registered
Sep 6 00:59:12.683480 systemd[1]: Started systemd-journald.service.
Sep 6 00:59:12.697855 systemd-modules-load[270]: Inserted module 'br_netfilter'
Sep 6 00:59:12.745219 kernel: audit: type=1130 audit(1757120352.705:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.700603 systemd-resolved[271]: Positive Trust Anchors:
Sep 6 00:59:12.809465 kernel: SCSI subsystem initialized
Sep 6 00:59:12.809554 kernel: audit: type=1130 audit(1757120352.756:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.700610 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:59:12.904175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:59:12.904188 kernel: audit: type=1130 audit(1757120352.829:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.904195 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:59:12.904202 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 00:59:12.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.700630 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:59:13.017550 kernel: audit: type=1130 audit(1757120352.930:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.702220 systemd-resolved[271]: Defaulting to hostname 'linux'.
Sep 6 00:59:13.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.705663 systemd[1]: Started systemd-resolved.service.
Sep 6 00:59:13.124990 kernel: audit: type=1130 audit(1757120353.025:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.125076 kernel: audit: type=1130 audit(1757120353.078:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:12.756598 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:59:12.829589 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:59:12.928179 systemd-modules-load[270]: Inserted module 'dm_multipath'
Sep 6 00:59:12.930682 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:59:13.025663 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 00:59:13.078681 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:59:13.134020 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 00:59:13.154027 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:59:13.154325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:59:13.157226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:59:13.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.157903 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:59:13.206495 kernel: audit: type=1130 audit(1757120353.156:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.218742 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 00:59:13.284458 kernel: audit: type=1130 audit(1757120353.218:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:13.276018 systemd[1]: Starting dracut-cmdline.service...
Sep 6 00:59:13.296509 dracut-cmdline[295]: dracut-dracut-053
Sep 6 00:59:13.296509 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Sep 6 00:59:13.296509 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7
Sep 6 00:59:13.368507 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:59:13.368519 kernel: iscsi: registered transport (tcp) Sep 6 00:59:13.433577 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:59:13.433596 kernel: QLogic iSCSI HBA Driver Sep 6 00:59:13.449247 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:59:13.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:13.458174 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:59:13.514483 kernel: raid6: avx2x4 gen() 48818 MB/s Sep 6 00:59:13.549453 kernel: raid6: avx2x4 xor() 21791 MB/s Sep 6 00:59:13.584449 kernel: raid6: avx2x2 gen() 53545 MB/s Sep 6 00:59:13.619450 kernel: raid6: avx2x2 xor() 32078 MB/s Sep 6 00:59:13.654450 kernel: raid6: avx2x1 gen() 45143 MB/s Sep 6 00:59:13.689450 kernel: raid6: avx2x1 xor() 27801 MB/s Sep 6 00:59:13.723478 kernel: raid6: sse2x4 gen() 21303 MB/s Sep 6 00:59:13.757449 kernel: raid6: sse2x4 xor() 11994 MB/s Sep 6 00:59:13.791452 kernel: raid6: sse2x2 gen() 21584 MB/s Sep 6 00:59:13.825483 kernel: raid6: sse2x2 xor() 13400 MB/s Sep 6 00:59:13.859454 kernel: raid6: sse2x1 gen() 18227 MB/s Sep 6 00:59:13.911442 kernel: raid6: sse2x1 xor() 8901 MB/s Sep 6 00:59:13.911458 kernel: raid6: using algorithm avx2x2 gen() 53545 MB/s Sep 6 00:59:13.911466 kernel: raid6: .... xor() 32078 MB/s, rmw enabled Sep 6 00:59:13.929697 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:59:13.976484 kernel: xor: automatically using best checksumming function avx Sep 6 00:59:14.072452 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:59:14.077470 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:59:14.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:14.086000 audit: BPF prog-id=7 op=LOAD Sep 6 00:59:14.086000 audit: BPF prog-id=8 op=LOAD Sep 6 00:59:14.087314 systemd[1]: Starting systemd-udevd.service... Sep 6 00:59:14.094994 systemd-udevd[475]: Using default interface naming scheme 'v252'. Sep 6 00:59:14.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:14.102686 systemd[1]: Started systemd-udevd.service. Sep 6 00:59:14.143554 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Sep 6 00:59:14.118104 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:59:14.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:14.147454 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:59:14.162715 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:59:14.251265 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:59:14.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:14.277426 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:59:14.287431 kernel: libata version 3.00 loaded. Sep 6 00:59:14.287476 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:59:14.323428 kernel: AES CTR mode by8 optimization enabled Sep 6 00:59:14.358197 kernel: ACPI: bus type USB registered Sep 6 00:59:14.358223 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 6 00:59:14.358232 kernel: usbcore: registered new interface driver usbfs Sep 6 00:59:14.358239 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Sep 6 00:59:14.410942 kernel: usbcore: registered new interface driver hub Sep 6 00:59:14.410968 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 6 00:59:14.514494 kernel: usbcore: registered new device driver usb Sep 6 00:59:14.514510 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 6 00:59:14.514641 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:72 Sep 6 00:59:14.514759 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 6 00:59:14.514863 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 6 00:59:14.515424 kernel: ahci 0000:00:17.0: version 3.0 Sep 6 00:59:15.187848 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Sep 6 00:59:15.187925 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 6 00:59:15.187983 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 6 00:59:15.188038 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 6 00:59:15.188092 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 6 00:59:15.188145 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:73 Sep 6 00:59:15.188197 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 6 00:59:15.188249 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 6 00:59:15.188301 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 6 00:59:15.188353 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 6 00:59:15.188406 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 6 00:59:15.188462 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 6 00:59:15.188512 kernel: scsi host0: ahci Sep 6 00:59:15.188572 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 6 00:59:15.188623 kernel: scsi host1: ahci Sep 6 00:59:15.188678 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 6 00:59:15.188731 kernel: scsi host2: ahci Sep 6 00:59:15.188788 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 6 00:59:15.188838 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 6 00:59:15.188888 kernel: scsi host3: ahci Sep 6 00:59:15.188946 kernel: hub 1-0:1.0: USB hub found Sep 6 00:59:15.189005 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 6 00:59:15.189059 kernel: scsi host4: ahci Sep 6 00:59:15.189114 kernel: hub 1-0:1.0: 16 ports detected Sep 6 00:59:15.189171 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 6 00:59:15.189223 kernel: scsi host5: ahci Sep 6 00:59:15.189277 kernel: hub 2-0:1.0: USB hub found Sep 6 00:59:15.189335 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 6 00:59:15.189388 kernel: scsi host6: ahci Sep 6 00:59:15.189454 kernel: hub 2-0:1.0: 10 ports detected Sep 6 00:59:15.189511 kernel: scsi host7: ahci Sep 6 00:59:15.189569 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 139 Sep 6 00:59:15.189577 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 139 Sep 6 00:59:15.189584 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 139 Sep 6 
00:59:15.189591 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 139 Sep 6 00:59:15.189597 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 139 Sep 6 00:59:15.189604 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 6 00:59:15.189658 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 139 Sep 6 00:59:15.189666 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Sep 6 00:59:15.803942 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 6 00:59:15.804140 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 139 Sep 6 00:59:15.804161 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 6 00:59:15.804258 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 139 Sep 6 00:59:15.804269 kernel: hub 1-14:1.0: USB hub found Sep 6 00:59:15.804365 kernel: hub 1-14:1.0: 4 ports detected Sep 6 00:59:15.804463 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 6 00:59:15.804542 kernel: port_module: 9 callbacks suppressed Sep 6 00:59:15.804554 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 6 00:59:15.804630 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804641 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 6 00:59:15.804717 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 6 00:59:15.804730 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 6 00:59:15.804874 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804886 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804896 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804905 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804914 kernel: 
ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 6 00:59:15.804924 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 6 00:59:15.804933 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 6 00:59:15.804945 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 6 00:59:15.804954 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 00:59:15.804964 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 6 00:59:15.804974 kernel: ata2.00: Features: NCQ-prio Sep 6 00:59:15.804983 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 6 00:59:15.804992 kernel: ata1.00: Features: NCQ-prio Sep 6 00:59:15.805002 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 6 00:59:15.805085 kernel: ata2.00: configured for UDMA/133 Sep 6 00:59:15.825479 kernel: ata1.00: configured for UDMA/133 Sep 6 00:59:15.825499 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 6 00:59:15.862425 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 6 00:59:15.881424 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 6 00:59:15.912066 kernel: usbcore: registered new interface driver usbhid Sep 6 00:59:15.912102 kernel: usbhid: USB HID core driver Sep 6 00:59:15.930428 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 6 00:59:15.930527 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 6 00:59:15.963424 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 00:59:15.978391 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:15.993280 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 6 00:59:16.285068 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 6 00:59:16.432237 kernel: sd 1:0:0:0: [sda] 4096-byte physical 
blocks Sep 6 00:59:16.432406 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 6 00:59:16.432534 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 6 00:59:16.432547 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 6 00:59:16.432662 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Sep 6 00:59:16.432742 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 6 00:59:16.432807 kernel: sd 0:0:0:0: [sdb] Write Protect is off Sep 6 00:59:16.432867 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 6 00:59:16.432928 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 00:59:16.432987 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 6 00:59:16.433057 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 00:59:16.433070 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 00:59:16.433170 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 00:59:16.433182 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 6 00:59:16.433282 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:16.433295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:59:16.433307 kernel: GPT:9289727 != 937703087 Sep 6 00:59:16.433318 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:59:16.433330 kernel: GPT:9289727 != 937703087 Sep 6 00:59:16.433340 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:59:16.433354 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 6 00:59:16.433366 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:16.433377 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Sep 6 00:59:16.464618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Sep 6 00:59:16.499653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (528) Sep 6 00:59:16.493038 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:59:16.510465 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:59:16.517205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:59:16.556177 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:59:16.570603 systemd[1]: Starting disk-uuid.service... Sep 6 00:59:16.623546 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:16.623557 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 6 00:59:16.623564 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:16.623617 disk-uuid[695]: Primary Header is updated. Sep 6 00:59:16.623617 disk-uuid[695]: Secondary Entries is updated. Sep 6 00:59:16.623617 disk-uuid[695]: Secondary Header is updated. Sep 6 00:59:16.681552 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 6 00:59:16.681562 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:16.681569 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 6 00:59:17.666485 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 00:59:17.685320 disk-uuid[696]: The operation has completed successfully. Sep 6 00:59:17.694559 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 6 00:59:17.723948 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:59:17.818837 kernel: audit: type=1130 audit(1757120357.731:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:17.818852 kernel: audit: type=1131 audit(1757120357.731:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:17.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:17.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:17.723994 systemd[1]: Finished disk-uuid.service. Sep 6 00:59:17.847512 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:59:17.751678 systemd[1]: Starting verity-setup.service... Sep 6 00:59:17.882004 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:59:17.892934 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:59:17.899780 systemd[1]: Finished verity-setup.service. Sep 6 00:59:17.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:17.964423 kernel: audit: type=1130 audit(1757120357.918:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.036495 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:59:18.036799 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:59:18.036908 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 6 00:59:18.123500 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:59:18.123512 kernel: BTRFS info (device sdb6): using free space tree Sep 6 00:59:18.123523 kernel: BTRFS info (device sdb6): has skinny extents Sep 6 00:59:18.123530 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 6 00:59:18.037311 systemd[1]: Starting ignition-setup.service... Sep 6 00:59:18.130843 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:59:18.146883 systemd[1]: Finished ignition-setup.service. Sep 6 00:59:18.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.163937 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:59:18.230574 kernel: audit: type=1130 audit(1757120358.162:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.222909 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:59:18.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.286000 audit: BPF prog-id=9 op=LOAD Sep 6 00:59:18.287213 systemd[1]: Starting systemd-networkd.service... Sep 6 00:59:18.321515 kernel: audit: type=1130 audit(1757120358.238:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:18.321529 kernel: audit: type=1334 audit(1757120358.286:24): prog-id=9 op=LOAD Sep 6 00:59:18.322842 systemd-networkd[882]: lo: Link UP Sep 6 00:59:18.322845 systemd-networkd[882]: lo: Gained carrier Sep 6 00:59:18.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.328973 ignition[867]: Ignition 2.14.0 Sep 6 00:59:18.392592 kernel: audit: type=1130 audit(1757120358.329:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.323262 systemd-networkd[882]: Enumeration completed Sep 6 00:59:18.328978 ignition[867]: Stage: fetch-offline Sep 6 00:59:18.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.323335 systemd[1]: Started systemd-networkd.service. Sep 6 00:59:18.537634 kernel: audit: type=1130 audit(1757120358.413:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.537648 kernel: audit: type=1130 audit(1757120358.469:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.537656 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 6 00:59:18.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:18.329006 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:18.571599 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Sep 6 00:59:18.323994 systemd-networkd[882]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:59:18.329019 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:18.329546 systemd[1]: Reached target network.target. Sep 6 00:59:18.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.360033 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:18.361868 unknown[867]: fetched base config from "system" Sep 6 00:59:18.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.638777 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:59:18.638777 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:59:18.638777 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:59:18.638777 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:59:18.638777 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 6 00:59:18.638777 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:59:18.638777 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:59:18.792635 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 6 00:59:18.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:18.360110 ignition[867]: parsed url from cmdline: "" Sep 6 00:59:18.361873 unknown[867]: fetched user config from "system" Sep 6 00:59:18.360112 ignition[867]: no config URL provided Sep 6 00:59:18.387649 systemd[1]: Starting iscsiuio.service... Sep 6 00:59:18.360115 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:59:18.399744 systemd[1]: Started iscsiuio.service. Sep 6 00:59:18.360140 ignition[867]: parsing config with SHA512: 0ceed95231691fb8146a5bf9a84fc86dec308ce5e638045c987b0fb4d63ca7cf9e8fe65f7dce0975d11763b93ca2ca52abb6ba7f045d93924a15d6fb2a67f01d Sep 6 00:59:18.414112 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:59:18.362170 ignition[867]: fetch-offline: fetch-offline passed Sep 6 00:59:18.469646 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:59:18.362172 ignition[867]: POST message to Packet Timeline Sep 6 00:59:18.470103 systemd[1]: Starting ignition-kargs.service... Sep 6 00:59:18.362178 ignition[867]: POST Status error: resource requires networking Sep 6 00:59:18.538432 systemd-networkd[882]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:59:18.362214 ignition[867]: Ignition finished successfully Sep 6 00:59:18.552010 systemd[1]: Starting iscsid.service... 
Sep 6 00:59:18.541988 ignition[892]: Ignition 2.14.0 Sep 6 00:59:18.578707 systemd[1]: Started iscsid.service. Sep 6 00:59:18.541991 ignition[892]: Stage: kargs Sep 6 00:59:18.599754 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:59:18.542047 ignition[892]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:18.616529 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:59:18.542058 ignition[892]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:18.630894 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:59:18.543371 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:18.646673 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:59:18.544756 ignition[892]: kargs: kargs passed Sep 6 00:59:18.666637 systemd[1]: Reached target remote-fs.target. Sep 6 00:59:18.544759 ignition[892]: POST message to Packet Timeline Sep 6 00:59:18.729042 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:59:18.544770 ignition[892]: GET https://metadata.packet.net/metadata: attempt #1 Sep 6 00:59:18.740004 systemd-networkd[882]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:59:18.546558 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59956->[::1]:53: read: connection refused Sep 6 00:59:18.763139 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:59:18.746914 ignition[892]: GET https://metadata.packet.net/metadata: attempt #2 Sep 6 00:59:18.770261 systemd-networkd[882]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 00:59:18.747300 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39467->[::1]:53: read: connection refused Sep 6 00:59:18.803190 systemd-networkd[882]: enp2s0f1np1: Link UP Sep 6 00:59:18.803687 systemd-networkd[882]: enp2s0f1np1: Gained carrier Sep 6 00:59:18.816964 systemd-networkd[882]: enp2s0f0np0: Link UP Sep 6 00:59:18.817379 systemd-networkd[882]: eno2: Link UP Sep 6 00:59:18.817789 systemd-networkd[882]: eno1: Link UP Sep 6 00:59:19.148037 ignition[892]: GET https://metadata.packet.net/metadata: attempt #3 Sep 6 00:59:19.149233 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42125->[::1]:53: read: connection refused Sep 6 00:59:19.579796 systemd-networkd[882]: enp2s0f0np0: Gained carrier Sep 6 00:59:19.588637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Sep 6 00:59:19.613558 systemd-networkd[882]: enp2s0f0np0: DHCPv4 address 139.178.90.135/31, gateway 139.178.90.134 acquired from 145.40.83.140 Sep 6 00:59:19.878645 systemd-networkd[882]: enp2s0f1np1: Gained IPv6LL Sep 6 00:59:19.949649 ignition[892]: GET https://metadata.packet.net/metadata: attempt #4 Sep 6 00:59:19.950768 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45165->[::1]:53: read: connection refused Sep 6 00:59:20.646648 systemd-networkd[882]: enp2s0f0np0: Gained IPv6LL Sep 6 00:59:21.552448 ignition[892]: GET https://metadata.packet.net/metadata: attempt #5 Sep 6 00:59:21.553881 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37143->[::1]:53: read: connection refused Sep 6 00:59:24.757048 ignition[892]: GET https://metadata.packet.net/metadata: attempt #6 Sep 6 00:59:25.822241 ignition[892]: GET result: OK Sep 6 
00:59:26.251750 ignition[892]: Ignition finished successfully Sep 6 00:59:26.255934 systemd[1]: Finished ignition-kargs.service. Sep 6 00:59:26.339227 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 6 00:59:26.339243 kernel: audit: type=1130 audit(1757120366.267:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:26.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:26.276613 ignition[920]: Ignition 2.14.0 Sep 6 00:59:26.269563 systemd[1]: Starting ignition-disks.service... Sep 6 00:59:26.276617 ignition[920]: Stage: disks Sep 6 00:59:26.276689 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:26.276699 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:26.278059 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:26.279680 ignition[920]: disks: disks passed Sep 6 00:59:26.279683 ignition[920]: POST message to Packet Timeline Sep 6 00:59:26.279693 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Sep 6 00:59:27.332177 ignition[920]: GET result: OK Sep 6 00:59:28.038480 ignition[920]: Ignition finished successfully Sep 6 00:59:28.041135 systemd[1]: Finished ignition-disks.service. Sep 6 00:59:28.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.055016 systemd[1]: Reached target initrd-root-device.target. 
Sep 6 00:59:28.129616 kernel: audit: type=1130 audit(1757120368.054:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.115616 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:59:28.115724 systemd[1]: Reached target local-fs.target. Sep 6 00:59:28.129739 systemd[1]: Reached target sysinit.target. Sep 6 00:59:28.155553 systemd[1]: Reached target basic.target. Sep 6 00:59:28.156189 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:59:28.192479 systemd-fsck[936]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:59:28.203830 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:59:28.291277 kernel: audit: type=1130 audit(1757120368.211:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.291292 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:59:28.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.217827 systemd[1]: Mounting sysroot.mount... Sep 6 00:59:28.299128 systemd[1]: Mounted sysroot.mount. Sep 6 00:59:28.312748 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:59:28.320348 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:59:28.334305 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 00:59:28.356809 systemd[1]: Starting flatcar-static-network.service... Sep 6 00:59:28.371478 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:59:28.371500 systemd[1]: Reached target ignition-diskful.target. 
Sep 6 00:59:28.391275 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:59:28.413620 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:59:28.484512 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (949) Sep 6 00:59:28.484537 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:59:28.427180 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:59:28.554154 kernel: BTRFS info (device sdb6): using free space tree Sep 6 00:59:28.554173 kernel: BTRFS info (device sdb6): has skinny extents Sep 6 00:59:28.554181 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 6 00:59:28.489517 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:59:28.615557 kernel: audit: type=1130 audit(1757120368.562:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.615595 coreos-metadata[943]: Sep 06 00:59:28.492 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 00:59:28.636646 coreos-metadata[944]: Sep 06 00:59:28.492 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 00:59:28.655676 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:59:28.563710 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:59:28.680647 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:59:28.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.623998 systemd[1]: Starting ignition-mount.service... 
Sep 6 00:59:28.749611 kernel: audit: type=1130 audit(1757120368.688:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:28.749672 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:59:28.643978 systemd[1]: Starting sysroot-boot.service... Sep 6 00:59:28.766653 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:59:28.664603 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 00:59:28.786557 ignition[1018]: INFO : Ignition 2.14.0 Sep 6 00:59:28.786557 ignition[1018]: INFO : Stage: mount Sep 6 00:59:28.786557 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:28.786557 ignition[1018]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:28.786557 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:28.786557 ignition[1018]: INFO : mount: mount passed Sep 6 00:59:28.786557 ignition[1018]: INFO : POST message to Packet Timeline Sep 6 00:59:28.786557 ignition[1018]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 00:59:28.664820 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 00:59:28.678489 systemd[1]: Finished sysroot-boot.service. Sep 6 00:59:29.500113 coreos-metadata[943]: Sep 06 00:59:29.500 INFO Fetch successful Sep 6 00:59:29.515600 coreos-metadata[944]: Sep 06 00:59:29.515 INFO Fetch successful Sep 6 00:59:29.532890 coreos-metadata[943]: Sep 06 00:59:29.532 INFO wrote hostname ci-3510.3.8-n-4cc2a8c2f2 to /sysroot/etc/hostname Sep 6 00:59:29.533411 systemd[1]: Finished flatcar-metadata-hostname.service. 
Sep 6 00:59:29.619629 kernel: audit: type=1130 audit(1757120369.554:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.554776 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 6 00:59:29.751622 kernel: audit: type=1130 audit(1757120369.627:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.751635 kernel: audit: type=1131 audit(1757120369.627:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:29.554815 systemd[1]: Finished flatcar-static-network.service. Sep 6 00:59:29.759614 ignition[1018]: INFO : GET result: OK Sep 6 00:59:30.588186 ignition[1018]: INFO : Ignition finished successfully Sep 6 00:59:30.590638 systemd[1]: Finished ignition-mount.service. Sep 6 00:59:30.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:30.606400 systemd[1]: Starting ignition-files.service... Sep 6 00:59:30.678514 kernel: audit: type=1130 audit(1757120370.604:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:30.672347 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:59:30.735271 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1037) Sep 6 00:59:30.735287 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:59:30.735295 kernel: BTRFS info (device sdb6): using free space tree Sep 6 00:59:30.759078 kernel: BTRFS info (device sdb6): has skinny extents Sep 6 00:59:30.808421 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 6 00:59:30.809581 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:59:30.825597 ignition[1056]: INFO : Ignition 2.14.0 Sep 6 00:59:30.825597 ignition[1056]: INFO : Stage: files Sep 6 00:59:30.825597 ignition[1056]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:30.825597 ignition[1056]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:30.825597 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:30.829121 unknown[1056]: wrote ssh authorized keys file for user: core Sep 6 00:59:30.890507 ignition[1056]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:59:30.890507 ignition[1056]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:59:30.890507 ignition[1056]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 
00:59:30.890507 ignition[1056]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:59:30.890507 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 00:59:30.890507 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:59:31.097004 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:59:31.113631 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656243005" Sep 6 00:59:31.356655 ignition[1056]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656243005": device or resource busy Sep 6 00:59:31.356655 ignition[1056]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2656243005", trying 
btrfs: device or resource busy Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656243005" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2656243005" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2656243005" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2656243005" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:59:31.356655 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 00:59:31.123023 systemd[1]: mnt-oem2656243005.mount: Deactivated successfully. 
Sep 6 00:59:31.556302 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Sep 6 00:59:32.055793 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 00:59:32.055793 ignition[1056]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:59:32.055793 ignition[1056]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:59:32.055793 ignition[1056]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Sep 6 00:59:32.055793 ignition[1056]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Sep 6 00:59:32.055793 ignition[1056]: INFO : files: op(12): [started] processing unit "containerd.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(12): [finished] processing unit "containerd.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: 
op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(17): [started] setting preset to enabled for "packet-phone-home.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(17): [finished] setting preset to enabled for "packet-phone-home.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:59:32.135628 ignition[1056]: INFO : files: files passed Sep 6 00:59:32.135628 ignition[1056]: INFO : POST message to Packet Timeline Sep 6 00:59:32.135628 ignition[1056]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 00:59:33.478058 ignition[1056]: INFO : GET result: OK Sep 6 00:59:34.328642 ignition[1056]: INFO : Ignition finished successfully Sep 6 00:59:34.332889 systemd[1]: Finished ignition-files.service. Sep 6 00:59:34.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.352607 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:59:34.423649 kernel: audit: type=1130 audit(1757120374.346:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:34.413661 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:59:34.447589 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:59:34.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.413999 systemd[1]: Starting ignition-quench.service... Sep 6 00:59:34.636659 kernel: audit: type=1130 audit(1757120374.457:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.636686 kernel: audit: type=1130 audit(1757120374.523:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.636703 kernel: audit: type=1131 audit(1757120374.523:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.430778 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:59:34.457672 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 6 00:59:34.457727 systemd[1]: Finished ignition-quench.service. Sep 6 00:59:34.800884 kernel: audit: type=1130 audit(1757120374.675:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.800898 kernel: audit: type=1131 audit(1757120374.675:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.523652 systemd[1]: Reached target ignition-complete.target. Sep 6 00:59:34.644905 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:59:34.658724 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:59:34.658766 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:59:34.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.676018 systemd[1]: Reached target initrd-fs.target. Sep 6 00:59:34.931625 kernel: audit: type=1130 audit(1757120374.858:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.810622 systemd[1]: Reached target initrd.target. Sep 6 00:59:34.825642 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Sep 6 00:59:34.826003 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:59:34.841737 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:59:34.859080 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:59:34.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.926315 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:59:35.071614 kernel: audit: type=1131 audit(1757120374.992:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:34.939664 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:59:34.964623 systemd[1]: Stopped target timers.target. Sep 6 00:59:34.972676 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:59:34.972769 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:59:34.992806 systemd[1]: Stopped target initrd.target. Sep 6 00:59:35.064637 systemd[1]: Stopped target basic.target. Sep 6 00:59:35.071715 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:59:35.099541 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:59:35.115869 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:59:35.130883 systemd[1]: Stopped target remote-fs.target. Sep 6 00:59:35.146868 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:59:35.161892 systemd[1]: Stopped target sysinit.target. Sep 6 00:59:35.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.178884 systemd[1]: Stopped target local-fs.target. 
Sep 6 00:59:35.326619 kernel: audit: type=1131 audit(1757120375.241:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.194871 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:59:35.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.211876 systemd[1]: Stopped target swap.target. Sep 6 00:59:35.409617 kernel: audit: type=1131 audit(1757120375.334:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.225775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:59:35.226082 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:59:35.242054 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:59:35.319622 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:59:35.319683 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:59:35.334711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:59:35.334784 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:59:35.402721 systemd[1]: Stopped target paths.target. Sep 6 00:59:35.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.416636 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 6 00:59:35.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.420647 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:59:35.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.438543 systemd[1]: Stopped target slices.target. Sep 6 00:59:35.572544 ignition[1105]: INFO : Ignition 2.14.0 Sep 6 00:59:35.572544 ignition[1105]: INFO : Stage: umount Sep 6 00:59:35.572544 ignition[1105]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:59:35.572544 ignition[1105]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 00:59:35.572544 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 00:59:35.572544 ignition[1105]: INFO : umount: umount passed Sep 6 00:59:35.572544 ignition[1105]: INFO : POST message to Packet Timeline Sep 6 00:59:35.572544 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 00:59:35.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:35.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:35.445672 systemd[1]: Stopped target sockets.target. Sep 6 00:59:35.459662 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:59:35.459732 systemd[1]: Closed iscsid.socket. Sep 6 00:59:35.478728 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:59:35.478852 systemd[1]: Closed iscsiuio.socket. Sep 6 00:59:35.496913 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:59:35.497224 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:59:35.514939 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:59:35.515237 systemd[1]: Stopped ignition-files.service. Sep 6 00:59:35.530941 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 00:59:35.531248 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 00:59:35.551637 systemd[1]: Stopping ignition-mount.service... Sep 6 00:59:35.566077 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:59:35.579588 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:59:35.579688 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:59:35.600712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:59:35.600835 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:59:35.643734 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:59:35.645332 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:59:35.645552 systemd[1]: Stopped sysroot-boot.service. 
Sep 6 00:59:35.657772 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:59:35.657976 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:59:36.551440 ignition[1105]: INFO : GET result: OK Sep 6 00:59:36.988909 ignition[1105]: INFO : Ignition finished successfully Sep 6 00:59:36.991595 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:59:37.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:36.991805 systemd[1]: Stopped ignition-mount.service. Sep 6 00:59:37.006939 systemd[1]: Stopped target network.target. Sep 6 00:59:37.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.022620 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:59:37.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.022835 systemd[1]: Stopped ignition-disks.service. Sep 6 00:59:37.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.038740 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:59:37.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.038861 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:59:37.054730 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 6 00:59:37.054862 systemd[1]: Stopped ignition-setup.service. Sep 6 00:59:37.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.070732 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:59:37.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.147000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:59:37.070859 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:59:37.085999 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:59:37.092563 systemd-networkd[882]: enp2s0f1np1: DHCPv6 lease lost Sep 6 00:59:37.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.100865 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:59:37.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.101635 systemd-networkd[882]: enp2s0f0np0: DHCPv6 lease lost Sep 6 00:59:37.217000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:59:37.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.115210 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:59:37.115446 systemd[1]: Stopped systemd-resolved.service. 
Sep 6 00:59:37.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.132030 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:59:37.132264 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:59:37.146066 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:59:37.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.146153 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:59:37.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.165109 systemd[1]: Stopping network-cleanup.service... Sep 6 00:59:37.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.177610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:59:37.177748 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:59:37.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.193781 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:59:37.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.193922 systemd[1]: Stopped systemd-sysctl.service. 
Sep 6 00:59:37.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.209970 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:59:37.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.210091 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:59:37.225903 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:59:37.244873 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:59:37.246261 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:59:37.246576 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:59:37.261015 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:59:37.261142 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:59:37.273719 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:59:37.273814 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:59:37.289686 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:59:37.289806 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:59:37.304779 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:59:37.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:37.304898 systemd[1]: Stopped dracut-cmdline.service. 
Sep 6 00:59:37.320734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:59:37.320854 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:59:37.571000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:59:37.571000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:59:37.571000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:59:37.572000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:59:37.572000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:59:37.337319 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:59:37.350498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:59:37.350529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:59:37.628534 iscsid[902]: iscsid shutting down. Sep 6 00:59:37.368014 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:59:37.368137 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:59:37.383706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:59:37.383822 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:59:37.401988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:59:37.403266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:59:37.403485 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:59:37.510951 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:59:37.511172 systemd[1]: Stopped network-cleanup.service. Sep 6 00:59:37.525979 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:59:37.544220 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:59:37.562084 systemd[1]: Switching root. Sep 6 00:59:37.628817 systemd-journald[269]: Journal stopped Sep 6 00:59:41.511652 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Sep 6 00:59:41.511668 kernel: SELinux: Class mctp_socket not defined in policy. 
Sep 6 00:59:41.511676 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:59:41.511681 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:59:41.511687 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:59:41.511692 kernel: SELinux: policy capability open_perms=1 Sep 6 00:59:41.511698 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:59:41.511704 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:59:41.511710 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:59:41.511715 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:59:41.511721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:59:41.511726 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:59:41.511731 systemd[1]: Successfully loaded SELinux policy in 319.969ms. Sep 6 00:59:41.511738 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.597ms. Sep 6 00:59:41.511747 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:59:41.511753 systemd[1]: Detected architecture x86-64. Sep 6 00:59:41.511759 systemd[1]: Detected first boot. Sep 6 00:59:41.511765 systemd[1]: Hostname set to . Sep 6 00:59:41.511772 systemd[1]: Initializing machine ID from random generator. Sep 6 00:59:41.511779 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:59:41.511785 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:59:41.511791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 6 00:59:41.511797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:59:41.511804 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:59:41.511810 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:59:41.511817 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Sep 6 00:59:41.511824 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:59:41.511831 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:59:41.511837 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:59:41.511843 systemd[1]: Created slice system-getty.slice. Sep 6 00:59:41.511849 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:59:41.511856 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:59:41.511862 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:59:41.511869 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:59:41.511875 systemd[1]: Created slice user.slice. Sep 6 00:59:41.511882 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:59:41.511888 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:59:41.511894 systemd[1]: Set up automount boot.automount. Sep 6 00:59:41.511900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:59:41.511906 systemd[1]: Reached target integritysetup.target. Sep 6 00:59:41.511914 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:59:41.511921 systemd[1]: Reached target remote-fs.target. Sep 6 00:59:41.511927 systemd[1]: Reached target slices.target. Sep 6 00:59:41.511934 systemd[1]: Reached target swap.target. Sep 6 00:59:41.511941 systemd[1]: Reached target torcx.target. 
Sep 6 00:59:41.511947 systemd[1]: Reached target veritysetup.target. Sep 6 00:59:41.511954 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:59:41.511960 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:59:41.511967 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:59:41.511974 kernel: kauditd_printk_skb: 48 callbacks suppressed Sep 6 00:59:41.511981 kernel: audit: type=1400 audit(1757120380.779:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:59:41.511987 kernel: audit: type=1335 audit(1757120380.779:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 00:59:41.511993 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:59:41.512000 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:59:41.512006 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:59:41.512012 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:59:41.512020 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:59:41.512027 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:59:41.512033 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:59:41.512040 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:59:41.512047 systemd[1]: Mounting media.mount... Sep 6 00:59:41.512053 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:59:41.512061 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:59:41.512067 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:59:41.512074 systemd[1]: Mounting tmp.mount... Sep 6 00:59:41.512080 systemd[1]: Starting flatcar-tmpfiles.service... 
Sep 6 00:59:41.512087 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:59:41.512094 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:59:41.512100 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:59:41.512107 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:59:41.512113 systemd[1]: Starting modprobe@drm.service... Sep 6 00:59:41.512121 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:59:41.512128 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:59:41.512134 kernel: fuse: init (API version 7.34) Sep 6 00:59:41.512140 systemd[1]: Starting modprobe@loop.service... Sep 6 00:59:41.512146 kernel: loop: module loaded Sep 6 00:59:41.512152 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:59:41.512159 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 00:59:41.512166 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 00:59:41.512172 systemd[1]: Starting systemd-journald.service... Sep 6 00:59:41.512180 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:59:41.512187 kernel: audit: type=1305 audit(1757120381.509:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:59:41.512195 systemd-journald[1298]: Journal started Sep 6 00:59:41.512220 systemd-journald[1298]: Runtime Journal (/run/log/journal/3054fa2de9304d18a7fce26fd20c6eb2) is 8.0M, max 639.3M, 631.3M free. 
Sep 6 00:59:40.779000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:59:40.779000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 00:59:41.509000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:59:41.509000 audit[1298]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff74ba09d0 a2=4000 a3=7fff74ba0a6c items=0 ppid=1 pid=1298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:59:41.509000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:59:41.558483 kernel: audit: type=1300 audit(1757120381.509:93): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff74ba09d0 a2=4000 a3=7fff74ba0a6c items=0 ppid=1 pid=1298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:59:41.558516 kernel: audit: type=1327 audit(1757120381.509:93): proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:59:41.672595 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:59:41.698595 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:59:41.724475 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:59:41.768471 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:59:41.787596 systemd[1]: Started systemd-journald.service. 
Sep 6 00:59:41.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:41.797159 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:59:41.844476 kernel: audit: type=1130 audit(1757120381.796:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:41.851657 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:59:41.858654 systemd[1]: Mounted media.mount. Sep 6 00:59:41.865650 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:59:41.873653 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:59:41.881617 systemd[1]: Mounted tmp.mount. Sep 6 00:59:41.888740 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:59:41.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:41.896755 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:59:41.944463 kernel: audit: type=1130 audit(1757120381.896:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:41.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:41.952725 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:59:41.952805 systemd[1]: Finished modprobe@configfs.service. 
Sep 6 00:59:42.001549 kernel: audit: type=1130 audit(1757120381.952:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.009733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:59:42.009810 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:59:42.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.060474 kernel: audit: type=1130 audit(1757120382.009:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.060510 kernel: audit: type=1131 audit(1757120382.009:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.119737 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 6 00:59:42.119812 systemd[1]: Finished modprobe@drm.service. Sep 6 00:59:42.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.128759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:59:42.128834 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:59:42.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.137739 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:59:42.137812 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:59:42.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.146749 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:59:42.146834 systemd[1]: Finished modprobe@loop.service. 
Sep 6 00:59:42.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.155775 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:59:42.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.163749 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:59:42.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.171771 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:59:42.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.179780 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:59:42.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.187919 systemd[1]: Reached target network-pre.target. Sep 6 00:59:42.198083 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:59:42.207798 systemd[1]: Mounting sys-kernel-config.mount... 
Sep 6 00:59:42.214637 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:59:42.215724 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:59:42.224118 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:59:42.227515 systemd-journald[1298]: Time spent on flushing to /var/log/journal/3054fa2de9304d18a7fce26fd20c6eb2 is 14.636ms for 1549 entries. Sep 6 00:59:42.227515 systemd-journald[1298]: System Journal (/var/log/journal/3054fa2de9304d18a7fce26fd20c6eb2) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:59:42.268874 systemd-journald[1298]: Received client request to flush runtime journal. Sep 6 00:59:42.240565 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:59:42.241140 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:59:42.258558 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:59:42.259134 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:59:42.266104 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:59:42.273214 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:59:42.280743 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:59:42.288609 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:59:42.296717 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:59:42.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.304703 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:59:42.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:59:42.312685 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:59:42.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.320641 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:59:42.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.330637 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:59:42.339250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:59:42.348632 udevadm[1325]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:59:42.356243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:59:42.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.524001 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:59:42.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.533340 systemd[1]: Starting systemd-udevd.service... Sep 6 00:59:42.544680 systemd-udevd[1332]: Using default interface naming scheme 'v252'. Sep 6 00:59:42.560608 systemd[1]: Started systemd-udevd.service. 
Sep 6 00:59:42.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:59:42.572268 systemd[1]: Found device dev-ttyS1.device. Sep 6 00:59:42.617118 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 6 00:59:42.617196 kernel: ACPI: button: Sleep Button [SLPB] Sep 6 00:59:42.617460 systemd[1]: Starting systemd-networkd.service... Sep 6 00:59:42.640207 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 6 00:59:42.640270 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:59:42.665432 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:59:42.703432 kernel: IPMI message handler: version 39.2 Sep 6 00:59:42.659000 audit[1393]: AVC avc: denied { confidentiality } for pid=1393 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:59:42.659000 audit[1393]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564abf3c5200 a1=4d9cc a2=7ffb49280bc5 a3=5 items=42 ppid=1332 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:59:42.743516 kernel: ipmi device interface Sep 6 00:59:42.659000 audit: CWD cwd="/" Sep 6 00:59:42.659000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=1 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH 
item=2 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=3 name=(null) inode=8260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=4 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=5 name=(null) inode=8261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=6 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=7 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=8 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=9 name=(null) inode=8263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=10 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:59:42.659000 audit: PATH item=11 name=(null) inode=8264 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=12 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=13 name=(null) inode=8265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=14 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=15 name=(null) inode=8266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=16 name=(null) inode=8262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=17 name=(null) inode=8267 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=18 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=19 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=20 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=21 name=(null) inode=8269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=22 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=23 name=(null) inode=8270 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=24 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=25 name=(null) inode=8271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=26 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=27 name=(null) inode=8272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=28 name=(null) inode=8268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=29 name=(null) inode=8273 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=30 name=(null) inode=8259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=31 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=32 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=33 name=(null) inode=8275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=34 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=35 name=(null) inode=8276 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=36 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=37 name=(null) inode=8277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=38 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=39 name=(null) inode=8278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=40 name=(null) inode=8274 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PATH item=41 name=(null) inode=8279 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 6 00:59:42.659000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 6 00:59:42.756164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:59:42.781428 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Sep 6 00:59:42.827492 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Sep 6 00:59:42.827777 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Sep 6 00:59:42.781705 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:59:42.833424 kernel: ipmi_si: IPMI System Interface driver
Sep 6 00:59:42.833466 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Sep 6 00:59:42.875540 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Sep 6 00:59:42.875622 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Sep 6 00:59:42.985495 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Sep 6 00:59:42.985516 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Sep 6 00:59:42.985533 kernel: iTCO_vendor_support: vendor-support=0
Sep 6 00:59:42.985561 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Sep 6 00:59:43.102179 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Sep 6 00:59:43.102301 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Sep 6 00:59:43.102401 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Sep 6 00:59:43.102480 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Sep 6 00:59:43.102494 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Sep 6 00:59:43.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.034205 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:59:43.203113 kernel: intel_rapl_common: Found RAPL domain package
Sep 6 00:59:43.203155 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Sep 6 00:59:43.203241 kernel: intel_rapl_common: Found RAPL domain core
Sep 6 00:59:43.223425 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Sep 6 00:59:43.223547 kernel: intel_rapl_common: Found RAPL domain uncore
Sep 6 00:59:43.223566 kernel: intel_rapl_common: Found RAPL domain dram
Sep 6 00:59:43.329838 systemd-networkd[1387]: bond0: netdev ready
Sep 6 00:59:43.332689 systemd-networkd[1387]: lo: Link UP
Sep 6 00:59:43.332691 systemd-networkd[1387]: lo: Gained carrier
Sep 6 00:59:43.333214 systemd-networkd[1387]: Enumeration completed
Sep 6 00:59:43.333310 systemd[1]: Started systemd-networkd.service.
Sep 6 00:59:43.333545 systemd-networkd[1387]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Sep 6 00:59:43.339292 systemd-networkd[1387]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a7.network.
Sep 6 00:59:43.353423 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Sep 6 00:59:43.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.375461 kernel: ipmi_ssif: IPMI SSIF Interface driver
Sep 6 00:59:43.378642 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:59:43.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.387257 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:59:43.403252 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:59:43.429961 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:59:43.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.438682 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:59:43.447723 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:59:43.452661 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:59:43.486406 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:59:43.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.494767 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:59:43.502558 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:59:43.502594 systemd[1]: Reached target local-fs.target.
Sep 6 00:59:43.510559 systemd[1]: Reached target machines.target.
Sep 6 00:59:43.520755 systemd[1]: Starting ldconfig.service...
Sep 6 00:59:43.528731 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:59:43.528880 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:59:43.531834 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:59:43.540903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:59:43.554692 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:59:43.557996 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:59:43.559043 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1443 (bootctl)
Sep 6 00:59:43.562470 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:59:43.578080 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:59:43.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.592623 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:59:43.599819 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:59:43.600253 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:59:43.656464 kernel: loop0: detected capacity change from 0 to 221472
Sep 6 00:59:43.726486 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Sep 6 00:59:43.753433 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Sep 6 00:59:43.753551 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Sep 6 00:59:43.797423 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:59:43.797461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Sep 6 00:59:43.805932 systemd-fsck[1458]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:59:43.805932 systemd-fsck[1458]: /dev/sdb1: 790 files, 120761/258078 clusters
Sep 6 00:59:43.806861 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:59:43.817118 systemd-networkd[1387]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network.
Sep 6 00:59:43.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.828821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:59:43.830113 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:59:43.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.846178 systemd[1]: Mounting boot.mount...
Sep 6 00:59:43.877473 kernel: loop1: detected capacity change from 0 to 221472
Sep 6 00:59:43.880190 systemd[1]: Mounted boot.mount.
Sep 6 00:59:43.890800 (sd-sysext)[1463]: Using extensions 'kubernetes'.
Sep 6 00:59:43.891092 (sd-sysext)[1463]: Merged extensions into '/usr'.
Sep 6 00:59:43.902473 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Sep 6 00:59:43.918909 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:59:43.919809 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:59:43.926671 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:59:43.927487 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:59:43.937176 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:59:43.945105 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:59:43.952497 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:59:43.952573 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:59:43.952649 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:59:43.954836 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:59:43.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:43.968672 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:59:43.974463 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Sep 6 00:59:43.991108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:59:43.991354 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:59:43.999432 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Sep 6 00:59:44.000841 systemd-networkd[1387]: bond0: Link UP
Sep 6 00:59:44.001077 systemd-networkd[1387]: enp2s0f1np1: Link UP
Sep 6 00:59:44.001234 systemd-networkd[1387]: enp2s0f1np1: Gained carrier
Sep 6 00:59:44.002372 systemd-networkd[1387]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network.
Sep 6 00:59:44.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.017805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:59:44.017922 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:59:44.024435 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Sep 6 00:59:44.024508 kernel: bond0: active interface up!
Sep 6 00:59:44.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.058764 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:59:44.058869 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:59:44.060423 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Sep 6 00:59:44.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.069776 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:59:44.069842 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.070353 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:59:44.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.079331 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:59:44.086036 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:59:44.091848 systemd-tmpfiles[1479]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:59:44.093019 systemd-tmpfiles[1479]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:59:44.094527 systemd-tmpfiles[1479]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:59:44.096048 ldconfig[1442]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:59:44.096679 systemd[1]: Reloading.
Sep 6 00:59:44.113476 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-09-06T00:59:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:59:44.113492 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-09-06T00:59:44Z" level=info msg="torcx already run"
Sep 6 00:59:44.143378 systemd-networkd[1387]: bond0: Gained carrier
Sep 6 00:59:44.143548 systemd-networkd[1387]: enp2s0f0np0: Link UP
Sep 6 00:59:44.143698 systemd-networkd[1387]: enp2s0f0np0: Gained carrier
Sep 6 00:59:44.157780 systemd-networkd[1387]: enp2s0f1np1: Link DOWN
Sep 6 00:59:44.157784 systemd-networkd[1387]: enp2s0f1np1: Lost carrier
Sep 6 00:59:44.170759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:59:44.170766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:59:44.183137 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:59:44.204322 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Sep 6 00:59:44.204351 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave
Sep 6 00:59:44.224684 systemd[1]: Finished ldconfig.service.
Sep 6 00:59:44.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.232160 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:59:44.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:59:44.242828 systemd[1]: Starting audit-rules.service...
Sep 6 00:59:44.250167 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:59:44.257000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:59:44.257000 audit[1585]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd15a13d90 a2=420 a3=0 items=0 ppid=1568 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:59:44.257000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:59:44.258249 augenrules[1585]: No rules
Sep 6 00:59:44.259264 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:59:44.269365 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:59:44.277367 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:59:44.285152 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:59:44.292302 systemd[1]: Finished audit-rules.service.
Sep 6 00:59:44.300875 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:59:44.309887 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:59:44.326187 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:59:44.338078 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.340246 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:59:44.349661 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:59:44.358725 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:59:44.370529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.370702 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:59:44.372414 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:59:44.377425 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Sep 6 00:59:44.389467 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:59:44.390301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:59:44.390429 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:59:44.397423 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1
Sep 6 00:59:44.398026 systemd-networkd[1387]: enp2s0f1np1: Link UP
Sep 6 00:59:44.398029 systemd-networkd[1387]: enp2s0f1np1: Gained carrier
Sep 6 00:59:44.404794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:59:44.404919 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:59:44.408132 systemd-resolved[1592]: Positive Trust Anchors:
Sep 6 00:59:44.408141 systemd-resolved[1592]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:59:44.408160 systemd-resolved[1592]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:59:44.412333 systemd-resolved[1592]: Using system hostname 'ci-3510.3.8-n-4cc2a8c2f2'.
Sep 6 00:59:44.413661 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:59:44.428771 systemd[1]: Started systemd-resolved.service.
Sep 6 00:59:44.434483 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Sep 6 00:59:44.450732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:59:44.450820 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:59:44.455486 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Sep 6 00:59:44.463787 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:59:44.471699 systemd[1]: Reached target network.target.
Sep 6 00:59:44.479547 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:59:44.487549 systemd[1]: Reached target time-set.target.
Sep 6 00:59:44.495555 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:59:44.495649 systemd[1]: Reached target sysinit.target.
Sep 6 00:59:44.503642 systemd[1]: Started motdgen.path.
Sep 6 00:59:44.510633 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:59:44.520809 systemd[1]: Started logrotate.timer.
Sep 6 00:59:44.527795 systemd[1]: Started mdadm.timer.
Sep 6 00:59:44.534742 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:59:44.542795 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:59:44.543080 systemd[1]: Reached target paths.target.
Sep 6 00:59:44.549903 systemd[1]: Reached target timers.target.
Sep 6 00:59:44.557632 systemd[1]: Listening on dbus.socket.
Sep 6 00:59:44.567813 systemd[1]: Starting docker.socket...
Sep 6 00:59:44.575324 systemd[1]: Listening on sshd.socket.
Sep 6 00:59:44.582602 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:59:44.582677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.583548 systemd[1]: Listening on docker.socket.
Sep 6 00:59:44.591699 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:59:44.591782 systemd[1]: Reached target sockets.target.
Sep 6 00:59:44.599592 systemd[1]: Reached target basic.target.
Sep 6 00:59:44.606689 systemd[1]: System is tainted: cgroupsv1
Sep 6 00:59:44.606740 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.606846 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:59:44.607989 systemd[1]: Starting containerd.service...
Sep 6 00:59:44.615676 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 6 00:59:44.625302 systemd[1]: Starting coreos-metadata.service...
Sep 6 00:59:44.634782 systemd[1]: Starting dbus.service...
Sep 6 00:59:44.642745 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:59:44.651443 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:59:44.651684 jq[1619]: false
Sep 6 00:59:44.659519 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:59:44.660897 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:59:44.665346 dbus-daemon[1618]: [system] SELinux support is enabled
Sep 6 00:59:44.665578 extend-filesystems[1620]: Found loop1
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sda
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb1
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb2
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb3
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found usr
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb4
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb6
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb7
Sep 6 00:59:44.675588 extend-filesystems[1620]: Found sdb9
Sep 6 00:59:44.675588 extend-filesystems[1620]: Checking size of /dev/sdb9
Sep 6 00:59:44.675588 extend-filesystems[1620]: Resized partition /dev/sdb9
Sep 6 00:59:44.816470 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Sep 6 00:59:44.669750 systemd[1]: Starting motdgen.service...
Sep 6 00:59:44.816562 extend-filesystems[1631]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:59:44.833461 coreos-metadata[1612]: Sep 06 00:59:44.680 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 6 00:59:44.833590 coreos-metadata[1615]: Sep 06 00:59:44.685 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 6 00:59:44.698038 systemd[1]: Starting prepare-helm.service...
Sep 6 00:59:44.709278 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:59:44.724263 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:59:44.738262 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:59:44.756506 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:59:44.757258 systemd[1]: Starting tcsd.service...
Sep 6 00:59:44.834173 update_engine[1655]: I0906 00:59:44.830806 1655 main.cc:92] Flatcar Update Engine starting
Sep 6 00:59:44.782142 systemd[1]: Starting update-engine.service...
Sep 6 00:59:44.834330 jq[1656]: true
Sep 6 00:59:44.808174 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:59:44.834476 update_engine[1655]: I0906 00:59:44.834321 1655 update_check_scheduler.cc:74] Next update check in 10m34s
Sep 6 00:59:44.827727 systemd[1]: Started dbus.service.
Sep 6 00:59:44.842390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:59:44.842516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:59:44.842779 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:59:44.842868 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:59:44.850762 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:59:44.850884 systemd[1]: Finished motdgen.service.
Sep 6 00:59:44.858127 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:59:44.858251 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:59:44.869216 jq[1664]: true
Sep 6 00:59:44.869859 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:59:44.878127 env[1665]: time="2025-09-06T00:59:44.878075608Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:59:44.882591 tar[1661]: linux-amd64/helm
Sep 6 00:59:44.884994 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Sep 6 00:59:44.885131 systemd[1]: Condition check resulted in tcsd.service being skipped.
Sep 6 00:59:44.886657 env[1665]: time="2025-09-06T00:59:44.886635273Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:59:44.886748 env[1665]: time="2025-09-06T00:59:44.886713162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887303 env[1665]: time="2025-09-06T00:59:44.887265679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887303 env[1665]: time="2025-09-06T00:59:44.887279293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887419 env[1665]: time="2025-09-06T00:59:44.887407656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887449 env[1665]: time="2025-09-06T00:59:44.887424560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887449 env[1665]: time="2025-09-06T00:59:44.887433028Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:59:44.887449 env[1665]: time="2025-09-06T00:59:44.887439131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887496 env[1665]: time="2025-09-06T00:59:44.887479171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887614 env[1665]: time="2025-09-06T00:59:44.887605079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887696 env[1665]: time="2025-09-06T00:59:44.887686520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:59:44.887716 env[1665]: time="2025-09-06T00:59:44.887696041Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:59:44.887733 env[1665]: time="2025-09-06T00:59:44.887721448Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:59:44.887733 env[1665]: time="2025-09-06T00:59:44.887729542Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:59:44.888668 systemd[1]: Started update-engine.service.
Sep 6 00:59:44.896715 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 6 00:59:44.897596 systemd[1]: Started locksmithd.service.
Sep 6 00:59:44.904541 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:59:44.904566 systemd[1]: Reached target system-config.target.
Sep 6 00:59:44.907211 env[1665]: time="2025-09-06T00:59:44.907196509Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:59:44.907239 env[1665]: time="2025-09-06T00:59:44.907217473Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:59:44.907239 env[1665]: time="2025-09-06T00:59:44.907226017Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:59:44.907270 env[1665]: time="2025-09-06T00:59:44.907242894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.907270 env[1665]: time="2025-09-06T00:59:44.907251143Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.907270 env[1665]: time="2025-09-06T00:59:44.907259682Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.907270 env[1665]: time="2025-09-06T00:59:44.907266859Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.907274573Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.907281951Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.907289130Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.907296802Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.907304766Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.910290223Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2 Sep 6 00:59:44.910357 env[1665]: time="2025-09-06T00:59:44.910343700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:59:44.910573 env[1665]: time="2025-09-06T00:59:44.910557836Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:59:44.910607 env[1665]: time="2025-09-06T00:59:44.910581388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910607 env[1665]: time="2025-09-06T00:59:44.910594205Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:59:44.910647 env[1665]: time="2025-09-06T00:59:44.910625215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910647 env[1665]: time="2025-09-06T00:59:44.910637405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910647 env[1665]: time="2025-09-06T00:59:44.910645071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910651760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910658589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910665054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910672578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910681149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910711 env[1665]: time="2025-09-06T00:59:44.910702833Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:59:44.910803 env[1665]: time="2025-09-06T00:59:44.910786047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910803 env[1665]: time="2025-09-06T00:59:44.910795710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910850 env[1665]: time="2025-09-06T00:59:44.910802625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:59:44.910850 env[1665]: time="2025-09-06T00:59:44.910809467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:59:44.910850 env[1665]: time="2025-09-06T00:59:44.910817857Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:59:44.911044 env[1665]: time="2025-09-06T00:59:44.911021466Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:59:44.911127 env[1665]: time="2025-09-06T00:59:44.911060036Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:59:44.911223 env[1665]: time="2025-09-06T00:59:44.911210040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:59:44.911389 env[1665]: time="2025-09-06T00:59:44.911364088Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:59:44.912984 env[1665]: time="2025-09-06T00:59:44.911397086Z" level=info msg="Connect containerd service" Sep 6 00:59:44.912984 env[1665]: time="2025-09-06T00:59:44.911415461Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:59:44.913666 systemd[1]: Starting systemd-logind.service... Sep 6 00:59:44.913941 env[1665]: time="2025-09-06T00:59:44.913930066Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:59:44.914072 env[1665]: time="2025-09-06T00:59:44.914034023Z" level=info msg="Start subscribing containerd event" Sep 6 00:59:44.914101 env[1665]: time="2025-09-06T00:59:44.914068540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:59:44.914101 env[1665]: time="2025-09-06T00:59:44.914087836Z" level=info msg="Start recovering state" Sep 6 00:59:44.914101 env[1665]: time="2025-09-06T00:59:44.914095394Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 6 00:59:44.914160 env[1665]: time="2025-09-06T00:59:44.914119523Z" level=info msg="containerd successfully booted in 0.036438s" Sep 6 00:59:44.914160 env[1665]: time="2025-09-06T00:59:44.914137392Z" level=info msg="Start event monitor" Sep 6 00:59:44.914160 env[1665]: time="2025-09-06T00:59:44.914153518Z" level=info msg="Start snapshots syncer" Sep 6 00:59:44.914215 env[1665]: time="2025-09-06T00:59:44.914162023Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:59:44.914215 env[1665]: time="2025-09-06T00:59:44.914168205Z" level=info msg="Start streaming server" Sep 6 00:59:44.919883 bash[1700]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:59:44.920496 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:59:44.920520 systemd[1]: Reached target user-config.target. Sep 6 00:59:44.928477 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:59:44.928709 systemd[1]: Started containerd.service. Sep 6 00:59:44.935642 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:59:44.939909 systemd-logind[1704]: Watching system buttons on /dev/input/event3 (Power Button) Sep 6 00:59:44.939921 systemd-logind[1704]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 6 00:59:44.939930 systemd-logind[1704]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 6 00:59:44.940043 systemd-logind[1704]: New seat seat0. Sep 6 00:59:44.945742 systemd[1]: Started systemd-logind.service. Sep 6 00:59:44.962637 locksmithd[1702]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:59:44.997267 sshd_keygen[1652]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:59:45.009463 systemd[1]: Finished sshd-keygen.service. 
Sep 6 00:59:45.017526 systemd[1]: Starting issuegen.service... Sep 6 00:59:45.024742 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:59:45.024864 systemd[1]: Finished issuegen.service. Sep 6 00:59:45.032463 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:59:45.040719 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:59:45.049371 systemd[1]: Started getty@tty1.service. Sep 6 00:59:45.057271 systemd[1]: Started serial-getty@ttyS1.service. Sep 6 00:59:45.065648 systemd[1]: Reached target getty.target. Sep 6 00:59:45.137505 tar[1661]: linux-amd64/LICENSE Sep 6 00:59:45.137593 tar[1661]: linux-amd64/README.md Sep 6 00:59:45.140158 systemd[1]: Finished prepare-helm.service. Sep 6 00:59:45.206470 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 6 00:59:45.234651 extend-filesystems[1631]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 6 00:59:45.234651 extend-filesystems[1631]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 6 00:59:45.234651 extend-filesystems[1631]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 6 00:59:45.272532 extend-filesystems[1620]: Resized filesystem in /dev/sdb9 Sep 6 00:59:45.235045 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:59:45.235170 systemd[1]: Finished extend-filesystems.service. Sep 6 00:59:45.606614 systemd-networkd[1387]: bond0: Gained IPv6LL Sep 6 00:59:45.674995 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:59:45.684765 systemd[1]: Reached target network-online.target. Sep 6 00:59:45.693620 systemd[1]: Starting kubelet.service... Sep 6 00:59:46.459654 systemd[1]: Started kubelet.service. 
Sep 6 00:59:46.972233 kubelet[1748]: E0906 00:59:46.972182 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:59:46.973256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:59:46.973341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:59:47.007613 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Sep 6 00:59:50.078725 login[1733]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 00:59:50.085325 login[1732]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 00:59:50.087171 systemd-logind[1704]: New session 1 of user core. Sep 6 00:59:50.087699 systemd[1]: Created slice user-500.slice. Sep 6 00:59:50.088269 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:59:50.089515 systemd-logind[1704]: New session 2 of user core. Sep 6 00:59:50.094160 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:59:50.094849 systemd[1]: Starting user@500.service... Sep 6 00:59:50.096911 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:59:50.172399 systemd[1772]: Queued start job for default target default.target. Sep 6 00:59:50.172509 systemd[1772]: Reached target paths.target. Sep 6 00:59:50.172520 systemd[1772]: Reached target sockets.target. Sep 6 00:59:50.172527 systemd[1772]: Reached target timers.target. Sep 6 00:59:50.172534 systemd[1772]: Reached target basic.target. Sep 6 00:59:50.172554 systemd[1772]: Reached target default.target. Sep 6 00:59:50.172568 systemd[1772]: Startup finished in 72ms. Sep 6 00:59:50.172613 systemd[1]: Started user@500.service. 
Sep 6 00:59:50.173226 systemd[1]: Started session-1.scope. Sep 6 00:59:50.173609 systemd[1]: Started session-2.scope. Sep 6 00:59:50.535451 coreos-metadata[1612]: Sep 06 00:59:50.535 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 6 00:59:50.536207 coreos-metadata[1615]: Sep 06 00:59:50.535 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 6 00:59:51.535563 coreos-metadata[1612]: Sep 06 00:59:51.535 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 6 00:59:51.536530 coreos-metadata[1615]: Sep 06 00:59:51.535 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 6 00:59:52.077455 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Sep 6 00:59:52.084465 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Sep 6 00:59:52.588553 systemd[1]: Created slice system-sshd.slice. Sep 6 00:59:52.589319 systemd[1]: Started sshd@0-139.178.90.135:22-139.178.68.195:47178.service. Sep 6 00:59:52.635757 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 47178 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 00:59:52.638717 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:59:52.648592 systemd-logind[1704]: New session 3 of user core. Sep 6 00:59:52.651050 systemd[1]: Started session-3.scope. Sep 6 00:59:52.710576 systemd[1]: Started sshd@1-139.178.90.135:22-139.178.68.195:47184.service. 
Sep 6 00:59:52.739617 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 47184 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 00:59:52.740344 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:59:52.742864 systemd-logind[1704]: New session 4 of user core. Sep 6 00:59:52.743286 systemd[1]: Started session-4.scope. Sep 6 00:59:52.805113 sshd[1800]: pam_unix(sshd:session): session closed for user core Sep 6 00:59:52.811647 systemd[1]: Started sshd@2-139.178.90.135:22-139.178.68.195:47186.service. Sep 6 00:59:52.813403 systemd[1]: sshd@1-139.178.90.135:22-139.178.68.195:47184.service: Deactivated successfully. Sep 6 00:59:52.815560 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:59:52.815605 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:59:52.816149 systemd-logind[1704]: Removed session 4. Sep 6 00:59:52.844449 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 47186 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 00:59:52.845180 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:59:52.848037 systemd-logind[1704]: New session 5 of user core. Sep 6 00:59:52.848566 systemd[1]: Started session-5.scope. Sep 6 00:59:52.911379 sshd[1806]: pam_unix(sshd:session): session closed for user core Sep 6 00:59:52.916495 systemd[1]: sshd@2-139.178.90.135:22-139.178.68.195:47186.service: Deactivated successfully. Sep 6 00:59:52.918845 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:59:52.918878 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:59:52.921020 systemd-logind[1704]: Removed session 5. Sep 6 00:59:52.981719 systemd-timesyncd[1594]: Contacted time server 166.88.142.52:123 (0.flatcar.pool.ntp.org). Sep 6 00:59:52.981886 systemd-timesyncd[1594]: Initial clock synchronization to Sat 2025-09-06 00:59:52.990664 UTC. 
Sep 6 00:59:53.598823 coreos-metadata[1615]: Sep 06 00:59:53.598 INFO Fetch successful Sep 6 00:59:53.635258 systemd[1]: Finished coreos-metadata.service. Sep 6 00:59:53.636137 systemd[1]: Started packet-phone-home.service. Sep 6 00:59:53.641686 curl[1819]: % Total % Received % Xferd Average Speed Time Time Time Current Sep 6 00:59:53.641866 curl[1819]: Dload Upload Total Spent Left Speed Sep 6 00:59:53.646120 coreos-metadata[1612]: Sep 06 00:59:53.646 INFO Fetch successful Sep 6 00:59:53.681203 unknown[1612]: wrote ssh authorized keys file for user: core Sep 6 00:59:53.693478 update-ssh-keys[1821]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:59:53.693742 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 00:59:53.693920 systemd[1]: Reached target multi-user.target. Sep 6 00:59:53.694655 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:59:53.698420 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:59:53.698572 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:59:53.698682 systemd[1]: Startup finished in 28.033s (kernel) + 15.863s (userspace) = 43.897s. Sep 6 00:59:54.425402 curl[1819]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Sep 6 00:59:54.427401 systemd[1]: packet-phone-home.service: Deactivated successfully. Sep 6 00:59:57.162788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:59:57.163254 systemd[1]: Stopped kubelet.service. Sep 6 00:59:57.166507 systemd[1]: Starting kubelet.service... Sep 6 00:59:57.439288 systemd[1]: Started kubelet.service. 
Sep 6 00:59:57.463396 kubelet[1836]: E0906 00:59:57.463372 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:59:57.465309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:59:57.465416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:00:02.922015 systemd[1]: Started sshd@3-139.178.90.135:22-139.178.68.195:58940.service. Sep 6 01:00:02.952900 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 58940 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:02.953575 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:02.955917 systemd-logind[1704]: New session 6 of user core. Sep 6 01:00:02.956286 systemd[1]: Started session-6.scope. Sep 6 01:00:03.005959 sshd[1853]: pam_unix(sshd:session): session closed for user core Sep 6 01:00:03.007354 systemd[1]: Started sshd@4-139.178.90.135:22-139.178.68.195:58948.service. Sep 6 01:00:03.007737 systemd[1]: sshd@3-139.178.90.135:22-139.178.68.195:58940.service: Deactivated successfully. Sep 6 01:00:03.008306 systemd-logind[1704]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:00:03.008368 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:00:03.008865 systemd-logind[1704]: Removed session 6. Sep 6 01:00:03.044271 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 58948 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:03.044996 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:03.047414 systemd-logind[1704]: New session 7 of user core. Sep 6 01:00:03.047887 systemd[1]: Started session-7.scope. 
Sep 6 01:00:03.095791 sshd[1859]: pam_unix(sshd:session): session closed for user core Sep 6 01:00:03.099396 systemd[1]: Started sshd@5-139.178.90.135:22-139.178.68.195:58950.service. Sep 6 01:00:03.100717 systemd[1]: sshd@4-139.178.90.135:22-139.178.68.195:58948.service: Deactivated successfully. Sep 6 01:00:03.101316 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:00:03.101335 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:00:03.101791 systemd-logind[1704]: Removed session 7. Sep 6 01:00:03.131371 sshd[1865]: Accepted publickey for core from 139.178.68.195 port 58950 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:03.132116 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:03.134946 systemd-logind[1704]: New session 8 of user core. Sep 6 01:00:03.135464 systemd[1]: Started session-8.scope. Sep 6 01:00:03.191645 sshd[1865]: pam_unix(sshd:session): session closed for user core Sep 6 01:00:03.197033 systemd[1]: Started sshd@6-139.178.90.135:22-139.178.68.195:58962.service. Sep 6 01:00:03.198830 systemd[1]: sshd@5-139.178.90.135:22-139.178.68.195:58950.service: Deactivated successfully. Sep 6 01:00:03.201301 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:00:03.201406 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:00:03.202287 systemd-logind[1704]: Removed session 8. Sep 6 01:00:03.231269 sshd[1872]: Accepted publickey for core from 139.178.68.195 port 58962 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:03.231955 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:03.234360 systemd-logind[1704]: New session 9 of user core. Sep 6 01:00:03.234822 systemd[1]: Started session-9.scope. 
Sep 6 01:00:03.298485 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 6 01:00:03.298887 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:00:03.314906 dbus-daemon[1618]: \xd0\xed\xa2j\x9bU: received setenforce notice (enforcing=-2038429808) Sep 6 01:00:03.319449 sudo[1878]: pam_unix(sudo:session): session closed for user root Sep 6 01:00:03.324323 sshd[1872]: pam_unix(sshd:session): session closed for user core Sep 6 01:00:03.330136 systemd[1]: Started sshd@7-139.178.90.135:22-139.178.68.195:58978.service. Sep 6 01:00:03.332079 systemd[1]: sshd@6-139.178.90.135:22-139.178.68.195:58962.service: Deactivated successfully. Sep 6 01:00:03.334505 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:00:03.334623 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:00:03.337154 systemd-logind[1704]: Removed session 9. Sep 6 01:00:03.414727 sshd[1880]: Accepted publickey for core from 139.178.68.195 port 58978 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:03.415886 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:03.419507 systemd-logind[1704]: New session 10 of user core. Sep 6 01:00:03.420341 systemd[1]: Started session-10.scope. Sep 6 01:00:03.477046 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 6 01:00:03.477171 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:00:03.478823 sudo[1887]: pam_unix(sudo:session): session closed for user root Sep 6 01:00:03.481340 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 6 01:00:03.481473 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:00:03.486931 systemd[1]: Stopping audit-rules.service... 
Sep 6 01:00:03.486000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 01:00:03.488005 auditctl[1890]: No rules Sep 6 01:00:03.488223 systemd[1]: audit-rules.service: Deactivated successfully. Sep 6 01:00:03.488369 systemd[1]: Stopped audit-rules.service. Sep 6 01:00:03.489285 systemd[1]: Starting audit-rules.service... Sep 6 01:00:03.493285 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 6 01:00:03.493326 kernel: audit: type=1305 audit(1757120403.486:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 6 01:00:03.501173 augenrules[1908]: No rules Sep 6 01:00:03.501565 systemd[1]: Finished audit-rules.service. Sep 6 01:00:03.502155 sudo[1886]: pam_unix(sudo:session): session closed for user root Sep 6 01:00:03.503058 sshd[1880]: pam_unix(sshd:session): session closed for user core Sep 6 01:00:03.504836 systemd[1]: Started sshd@8-139.178.90.135:22-139.178.68.195:58994.service. Sep 6 01:00:03.505244 systemd[1]: sshd@7-139.178.90.135:22-139.178.68.195:58978.service: Deactivated successfully. Sep 6 01:00:03.505819 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:00:03.505881 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:00:03.506729 systemd-logind[1704]: Removed session 10. 
Sep 6 01:00:03.486000 audit[1890]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb19e2930 a2=420 a3=0 items=0 ppid=1 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.539904 kernel: audit: type=1300 audit(1757120403.486:140): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb19e2930 a2=420 a3=0 items=0 ppid=1 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.539977 kernel: audit: type=1327 audit(1757120403.486:140): proctitle=2F7362696E2F617564697463746C002D44 Sep 6 01:00:03.486000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 6 01:00:03.549426 kernel: audit: type=1131 audit(1757120403.486:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.571901 kernel: audit: type=1130 audit(1757120403.500:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:00:03.573847 sshd[1914]: Accepted publickey for core from 139.178.68.195 port 58994 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:00:03.575753 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:00:03.578076 systemd-logind[1704]: New session 11 of user core. Sep 6 01:00:03.578536 systemd[1]: Started session-11.scope. Sep 6 01:00:03.594354 kernel: audit: type=1106 audit(1757120403.500:143): pid=1886 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.500000 audit[1886]: USER_END pid=1886 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.620422 kernel: audit: type=1104 audit(1757120403.500:144): pid=1886 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.500000 audit[1886]: CRED_DISP pid=1886 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.625871 sudo[1919]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:00:03.625998 sudo[1919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:00:03.638373 systemd[1]: Starting docker.service... 
Sep 6 01:00:03.644049 kernel: audit: type=1106 audit(1757120403.502:145): pid=1880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.502000 audit[1880]: USER_END pid=1880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.655158 env[1933]: time="2025-09-06T01:00:03.655104438Z" level=info msg="Starting up" Sep 6 01:00:03.656196 env[1933]: time="2025-09-06T01:00:03.656175686Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:00:03.656224 env[1933]: time="2025-09-06T01:00:03.656198241Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:00:03.656244 env[1933]: time="2025-09-06T01:00:03.656226412Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:00:03.656262 env[1933]: time="2025-09-06T01:00:03.656238990Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:00:03.657156 env[1933]: time="2025-09-06T01:00:03.657109354Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:00:03.657156 env[1933]: time="2025-09-06T01:00:03.657118913Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:00:03.657156 env[1933]: time="2025-09-06T01:00:03.657126060Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:00:03.657156 env[1933]: 
time="2025-09-06T01:00:03.657131078Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:00:03.502000 audit[1880]: CRED_DISP pid=1880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.676438 kernel: audit: type=1104 audit(1757120403.502:146): pid=1880 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.135:22-139.178.68.195:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.728011 kernel: audit: type=1130 audit(1757120403.503:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.135:22-139.178.68.195:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.90.135:22-139.178.68.195:58978 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:00:03.572000 audit[1914]: USER_ACCT pid=1914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.574000 audit[1914]: CRED_ACQ pid=1914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.574000 audit[1914]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda8df5270 a2=3 a3=0 items=0 ppid=1 pid=1914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.574000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:00:03.579000 audit[1914]: USER_START pid=1914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.580000 audit[1918]: CRED_ACQ pid=1918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:00:03.624000 audit[1919]: USER_ACCT pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 6 01:00:03.624000 audit[1919]: CRED_REFR pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.625000 audit[1919]: USER_START pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 6 01:00:03.840816 env[1933]: time="2025-09-06T01:00:03.840721010Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 01:00:03.840816 env[1933]: time="2025-09-06T01:00:03.840766774Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 01:00:03.841261 env[1933]: time="2025-09-06T01:00:03.841136867Z" level=info msg="Loading containers: start." Sep 6 01:00:03.891000 audit[1975]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.891000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff9cceca90 a2=0 a3=7fff9cceca7c items=0 ppid=1933 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.891000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 6 01:00:03.892000 audit[1977]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.892000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd12f96820 a2=0 a3=7ffd12f9680c items=0 ppid=1933 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.892000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 6 01:00:03.893000 audit[1979]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.893000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1fb30120 a2=0 a3=7fff1fb3010c items=0 ppid=1933 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.893000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 01:00:03.894000 audit[1981]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.894000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe11ba5a30 a2=0 a3=7ffe11ba5a1c items=0 ppid=1933 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.894000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 01:00:03.895000 audit[1983]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.895000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeae35b100 a2=0 a3=7ffeae35b0ec items=0 ppid=1933 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.895000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 6 01:00:03.922000 audit[1988]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.922000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb8daa790 a2=0 a3=7fffb8daa77c items=0 ppid=1933 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.922000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 6 01:00:03.926000 audit[1990]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.926000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc41a7bcc0 a2=0 a3=7ffc41a7bcac items=0 ppid=1933 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.926000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 6 01:00:03.928000 audit[1992]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.928000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed5a42620 a2=0 a3=7ffed5a4260c items=0 
ppid=1933 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.928000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 6 01:00:03.929000 audit[1994]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.929000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc6870a490 a2=0 a3=7ffc6870a47c items=0 ppid=1933 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.929000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:00:03.934000 audit[1998]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.934000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff4daba2c0 a2=0 a3=7fff4daba2ac items=0 ppid=1933 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.934000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:00:03.947000 audit[1999]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:03.947000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffedeb195e0 a2=0 a3=7ffedeb195cc 
items=0 ppid=1933 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:03.947000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:00:03.970501 kernel: Initializing XFRM netlink socket Sep 6 01:00:04.037477 env[1933]: time="2025-09-06T01:00:04.037385268Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 01:00:04.056000 audit[2007]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.056000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffec770c9c0 a2=0 a3=7ffec770c9ac items=0 ppid=1933 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.056000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 6 01:00:04.089000 audit[2010]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.089000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc39337c60 a2=0 a3=7ffc39337c4c items=0 ppid=1933 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.089000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 6 01:00:04.097000 audit[2013]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.097000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff874a7fb0 a2=0 a3=7fff874a7f9c items=0 ppid=1933 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.097000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 6 01:00:04.103000 audit[2015]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.103000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff2ce20440 a2=0 a3=7fff2ce2042c items=0 ppid=1933 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.103000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 6 01:00:04.108000 audit[2017]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.108000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc3a441d10 a2=0 a3=7ffc3a441cfc items=0 ppid=1933 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.108000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 6 01:00:04.114000 audit[2019]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.114000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe47816300 a2=0 a3=7ffe478162ec items=0 ppid=1933 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.114000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 6 01:00:04.119000 audit[2021]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.119000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffed14e33d0 a2=0 a3=7ffed14e33bc items=0 ppid=1933 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.119000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 6 01:00:04.142000 audit[2024]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.142000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=508 a0=3 a1=7fff02f189a0 a2=0 a3=7fff02f1898c items=0 ppid=1933 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 6 01:00:04.147000 audit[2026]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.147000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffcbda33430 a2=0 a3=7ffcbda3341c items=0 ppid=1933 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.147000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 6 01:00:04.152000 audit[2028]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.152000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd9ccc1b20 a2=0 a3=7ffd9ccc1b0c items=0 ppid=1933 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.152000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 6 01:00:04.157000 audit[2030]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.157000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd18b513f0 a2=0 a3=7ffd18b513dc items=0 ppid=1933 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.157000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 6 01:00:04.160112 systemd-networkd[1387]: docker0: Link UP Sep 6 01:00:04.172000 audit[2034]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.172000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8df42b40 a2=0 a3=7ffd8df42b2c items=0 ppid=1933 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.172000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:00:04.185000 audit[2035]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:04.185000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffff29ec240 a2=0 
a3=7ffff29ec22c items=0 ppid=1933 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:04.185000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 6 01:00:04.187586 env[1933]: time="2025-09-06T01:00:04.187537273Z" level=info msg="Loading containers: done." Sep 6 01:00:04.215262 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3356780331-merged.mount: Deactivated successfully. Sep 6 01:00:04.223461 env[1933]: time="2025-09-06T01:00:04.223338240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:00:04.223782 env[1933]: time="2025-09-06T01:00:04.223730646Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:00:04.224042 env[1933]: time="2025-09-06T01:00:04.223952843Z" level=info msg="Daemon has completed initialization" Sep 6 01:00:04.247737 systemd[1]: Started docker.service. Sep 6 01:00:04.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:04.262215 env[1933]: time="2025-09-06T01:00:04.262095748Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:00:05.400230 env[1665]: time="2025-09-06T01:00:05.400136497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 01:00:06.112823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017285900.mount: Deactivated successfully. 
Sep 6 01:00:07.110211 env[1665]: time="2025-09-06T01:00:07.110157411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:07.124674 env[1665]: time="2025-09-06T01:00:07.124631513Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:07.128845 env[1665]: time="2025-09-06T01:00:07.128800018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:07.131984 env[1665]: time="2025-09-06T01:00:07.131914500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:07.133672 env[1665]: time="2025-09-06T01:00:07.133586375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 01:00:07.134299 env[1665]: time="2025-09-06T01:00:07.134228585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 01:00:07.662761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:00:07.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:07.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 6 01:00:07.663251 systemd[1]: Stopped kubelet.service. Sep 6 01:00:07.666590 systemd[1]: Starting kubelet.service... Sep 6 01:00:07.889654 systemd[1]: Started kubelet.service. Sep 6 01:00:07.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:07.909172 kubelet[2096]: E0906 01:00:07.909097 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:00:07.910136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:00:07.910223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:00:07.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 6 01:00:08.528719 env[1665]: time="2025-09-06T01:00:08.528662806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:08.529323 env[1665]: time="2025-09-06T01:00:08.529277701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:08.530599 env[1665]: time="2025-09-06T01:00:08.530560175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:08.531519 env[1665]: time="2025-09-06T01:00:08.531482253Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:08.532053 env[1665]: time="2025-09-06T01:00:08.531997359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 01:00:08.532389 env[1665]: time="2025-09-06T01:00:08.532378063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 01:00:09.663936 env[1665]: time="2025-09-06T01:00:09.663878340Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:09.664622 env[1665]: time="2025-09-06T01:00:09.664551930Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:09.665682 env[1665]: time="2025-09-06T01:00:09.665629182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:09.666668 env[1665]: time="2025-09-06T01:00:09.666623746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:09.667184 env[1665]: time="2025-09-06T01:00:09.667134535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 01:00:09.667506 env[1665]: time="2025-09-06T01:00:09.667477951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 01:00:10.629087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158583997.mount: Deactivated successfully. 
Sep 6 01:00:11.020689 env[1665]: time="2025-09-06T01:00:11.020663995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:11.021303 env[1665]: time="2025-09-06T01:00:11.021290411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:11.021883 env[1665]: time="2025-09-06T01:00:11.021870155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:11.022424 env[1665]: time="2025-09-06T01:00:11.022410097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:11.022730 env[1665]: time="2025-09-06T01:00:11.022703299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 01:00:11.023102 env[1665]: time="2025-09-06T01:00:11.023068824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:00:11.577103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129699476.mount: Deactivated successfully. 
Sep 6 01:00:12.333698 env[1665]: time="2025-09-06T01:00:12.333644894Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.334315 env[1665]: time="2025-09-06T01:00:12.334266474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.335359 env[1665]: time="2025-09-06T01:00:12.335302672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.336268 env[1665]: time="2025-09-06T01:00:12.336213005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.336793 env[1665]: time="2025-09-06T01:00:12.336742661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 01:00:12.337221 env[1665]: time="2025-09-06T01:00:12.337185439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:00:12.876955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126824076.mount: Deactivated successfully. 
Sep 6 01:00:12.877874 env[1665]: time="2025-09-06T01:00:12.877830486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.878393 env[1665]: time="2025-09-06T01:00:12.878377800Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.879043 env[1665]: time="2025-09-06T01:00:12.879030833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.879699 env[1665]: time="2025-09-06T01:00:12.879686653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:12.879991 env[1665]: time="2025-09-06T01:00:12.879953469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 01:00:12.880283 env[1665]: time="2025-09-06T01:00:12.880271139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 01:00:13.410255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976991217.mount: Deactivated successfully. 
Sep 6 01:00:15.039813 env[1665]: time="2025-09-06T01:00:15.039732992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:15.040747 env[1665]: time="2025-09-06T01:00:15.040704130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:15.041827 env[1665]: time="2025-09-06T01:00:15.041787080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:15.042914 env[1665]: time="2025-09-06T01:00:15.042875931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:15.043381 env[1665]: time="2025-09-06T01:00:15.043329839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 01:00:17.338731 systemd[1]: Stopped kubelet.service. Sep 6 01:00:17.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:17.339980 systemd[1]: Starting kubelet.service... Sep 6 01:00:17.344314 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 6 01:00:17.344357 kernel: audit: type=1130 audit(1757120417.338:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:00:17.353331 systemd[1]: Reloading. Sep 6 01:00:17.385953 /usr/lib/systemd/system-generators/torcx-generator[2183]: time="2025-09-06T01:00:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:00:17.385969 /usr/lib/systemd/system-generators/torcx-generator[2183]: time="2025-09-06T01:00:17Z" level=info msg="torcx already run" Sep 6 01:00:17.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:17.399422 kernel: audit: type=1131 audit(1757120417.338:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:17.482878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:00:17.482885 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:00:17.501354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:00:17.567044 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 01:00:17.567087 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 01:00:17.567219 systemd[1]: Stopped kubelet.service. 
Sep 6 01:00:17.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:00:17.568081 systemd[1]: Starting kubelet.service... Sep 6 01:00:17.624601 kernel: audit: type=1130 audit(1757120417.566:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 6 01:00:17.797582 systemd[1]: Started kubelet.service. Sep 6 01:00:17.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:17.820029 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:00:17.820029 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:00:17.820029 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 01:00:17.820307 kubelet[2258]: I0906 01:00:17.820030 2258 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:00:17.857477 kernel: audit: type=1130 audit(1757120417.796:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:18.001171 kubelet[2258]: I0906 01:00:18.001124 2258 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:00:18.001171 kubelet[2258]: I0906 01:00:18.001138 2258 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:00:18.001258 kubelet[2258]: I0906 01:00:18.001252 2258 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:00:18.019437 kubelet[2258]: E0906 01:00:18.019396 2258 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.90.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:18.019997 kubelet[2258]: I0906 01:00:18.019954 2258 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:00:18.027120 kubelet[2258]: E0906 01:00:18.027043 2258 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:00:18.027120 kubelet[2258]: I0906 01:00:18.027070 2258 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 6 01:00:18.047549 kubelet[2258]: I0906 01:00:18.047510 2258 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:00:18.055383 kubelet[2258]: I0906 01:00:18.055322 2258 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:00:18.055445 kubelet[2258]: I0906 01:00:18.055401 2258 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:00:18.055598 kubelet[2258]: I0906 01:00:18.055470 2258 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-4cc2a8c2f2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMem
oryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:00:18.055715 kubelet[2258]: I0906 01:00:18.055605 2258 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:00:18.055715 kubelet[2258]: I0906 01:00:18.055614 2258 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:00:18.055715 kubelet[2258]: I0906 01:00:18.055678 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:00:18.071475 kubelet[2258]: I0906 01:00:18.071414 2258 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:00:18.071475 kubelet[2258]: I0906 01:00:18.071448 2258 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:00:18.071584 kubelet[2258]: I0906 01:00:18.071485 2258 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:00:18.071584 kubelet[2258]: I0906 01:00:18.071503 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:00:18.075170 kubelet[2258]: W0906 01:00:18.075092 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4cc2a8c2f2&limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:18.075246 kubelet[2258]: E0906 01:00:18.075168 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4cc2a8c2f2&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:18.076532 kubelet[2258]: W0906 01:00:18.076461 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:18.076532 kubelet[2258]: E0906 01:00:18.076512 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:18.083760 kubelet[2258]: I0906 01:00:18.083694 2258 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:00:18.084481 kubelet[2258]: I0906 01:00:18.084445 2258 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:00:18.085482 kubelet[2258]: W0906 01:00:18.085445 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 6 01:00:18.088679 kubelet[2258]: I0906 01:00:18.088627 2258 server.go:1274] "Started kubelet" Sep 6 01:00:18.089001 kubelet[2258]: I0906 01:00:18.088866 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:00:18.089182 kubelet[2258]: I0906 01:00:18.089040 2258 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:00:18.089527 kubelet[2258]: I0906 01:00:18.089464 2258 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:00:18.089000 audit[2258]: AVC avc: denied { mac_admin } for pid=2258 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:18.091409 kubelet[2258]: I0906 01:00:18.091258 2258 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 01:00:18.091409 kubelet[2258]: I0906 01:00:18.091339 2258 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 01:00:18.091584 kubelet[2258]: I0906 01:00:18.091462 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:00:18.091693 kubelet[2258]: I0906 01:00:18.091556 2258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:00:18.106274 kubelet[2258]: I0906 01:00:18.106225 2258 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:00:18.106674 kubelet[2258]: E0906 01:00:18.105871 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not 
found" Sep 6 01:00:18.106674 kubelet[2258]: E0906 01:00:18.106613 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4cc2a8c2f2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="200ms" Sep 6 01:00:18.106955 kubelet[2258]: I0906 01:00:18.105870 2258 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:00:18.107359 kubelet[2258]: I0906 01:00:18.107289 2258 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:00:18.107526 kubelet[2258]: E0906 01:00:18.107368 2258 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:00:18.107749 kubelet[2258]: I0906 01:00:18.107713 2258 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:00:18.107930 kubelet[2258]: I0906 01:00:18.107894 2258 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:00:18.108226 kubelet[2258]: W0906 01:00:18.108078 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:18.108367 kubelet[2258]: E0906 01:00:18.108279 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:18.110719 kubelet[2258]: I0906 01:00:18.110672 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:00:18.112259 kubelet[2258]: I0906 01:00:18.112236 2258 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:00:18.116215 kubelet[2258]: E0906 01:00:18.110814 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.90.135:6443/api/v1/namespaces/default/events\": dial tcp 139.178.90.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-4cc2a8c2f2.18628bb5ebc1a4fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-4cc2a8c2f2,UID:ci-3510.3.8-n-4cc2a8c2f2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-4cc2a8c2f2,},FirstTimestamp:2025-09-06 01:00:18.088592638 +0000 UTC m=+0.288269375,LastTimestamp:2025-09-06 01:00:18.088592638 +0000 UTC m=+0.288269375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-4cc2a8c2f2,}" Sep 6 01:00:18.089000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:18.182433 kernel: audit: type=1400 audit(1757120418.089:190): avc: denied { mac_admin } for pid=2258 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:18.182485 kernel: audit: type=1401 audit(1757120418.089:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:18.182501 kernel: audit: type=1300 audit(1757120418.089:190): arch=c000003e syscall=188 success=no exit=-22 a0=c000ea8ba0 a1=c000a29c20 a2=c000ea8b70 a3=25 items=0 ppid=1 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.089000 audit[2258]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ea8ba0 a1=c000a29c20 a2=c000ea8b70 a3=25 items=0 ppid=1 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.190340 kubelet[2258]: I0906 01:00:18.190327 2258 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:00:18.190340 kubelet[2258]: I0906 01:00:18.190339 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:00:18.190404 kubelet[2258]: I0906 01:00:18.190347 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:00:18.191225 kubelet[2258]: I0906 01:00:18.191191 2258 policy_none.go:49] "None policy: Start" Sep 6 01:00:18.191386 kubelet[2258]: I0906 01:00:18.191379 2258 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:00:18.191422 kubelet[2258]: I0906 01:00:18.191391 2258 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:00:18.207300 kubelet[2258]: E0906 01:00:18.207262 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:18.274146 kernel: audit: type=1327 audit(1757120418.089:190): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:18.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:18.307429 kubelet[2258]: E0906 01:00:18.307360 
2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:18.307631 kubelet[2258]: E0906 01:00:18.307587 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4cc2a8c2f2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="400ms" Sep 6 01:00:18.364708 kernel: audit: type=1400 audit(1757120418.089:191): avc: denied { mac_admin } for pid=2258 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:18.089000 audit[2258]: AVC avc: denied { mac_admin } for pid=2258 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:18.408194 kubelet[2258]: E0906 01:00:18.408145 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:18.427469 kernel: audit: type=1401 audit(1757120418.089:191): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:18.089000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:18.089000 audit[2258]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000493de0 a1=c000a29c38 a2=c000ea8c30 a3=25 items=0 ppid=1 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.089000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:18.110000 audit[2283]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.110000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7865dd80 a2=0 a3=7fff7865dd6c items=0 ppid=2258 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 01:00:18.112000 audit[2284]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.112000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc001c4440 a2=0 a3=7ffc001c442c items=0 ppid=2258 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 6 01:00:18.114000 audit[2286]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.114000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff968bff40 a2=0 a3=7fff968bff2c items=0 ppid=2258 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:00:18.116000 audit[2288]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.116000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcd0bb7860 a2=0 a3=7ffcd0bb784c items=0 ppid=2258 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 6 01:00:18.461087 kubelet[2258]: I0906 01:00:18.461077 2258 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:00:18.459000 audit[2258]: AVC avc: denied { mac_admin } for pid=2258 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:18.459000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:18.459000 audit[2258]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000831710 a1=c00114eea0 a2=c0008316e0 a3=25 items=0 ppid=1 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.459000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:18.459000 audit[2293]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.459000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff9ecf4c90 a2=0 a3=7fff9ecf4c7c items=0 ppid=2258 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 6 01:00:18.461412 kubelet[2258]: I0906 01:00:18.461106 2258 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 01:00:18.461412 kubelet[2258]: I0906 01:00:18.461166 2258 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:00:18.461412 kubelet[2258]: I0906 01:00:18.461174 2258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:00:18.461412 kubelet[2258]: I0906 01:00:18.461276 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:00:18.461412 kubelet[2258]: I0906 01:00:18.461274 2258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 6 01:00:18.461706 kubelet[2258]: E0906 01:00:18.461698 2258 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:18.460000 audit[2295]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:18.460000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc96ce3f60 a2=0 a3=7ffc96ce3f4c items=0 ppid=2258 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 6 01:00:18.460000 audit[2296]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.460000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff89801d80 a2=0 a3=7fff89801d6c items=0 ppid=2258 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 01:00:18.461926 kubelet[2258]: I0906 01:00:18.461844 2258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:00:18.461926 kubelet[2258]: I0906 01:00:18.461855 2258 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:00:18.461926 kubelet[2258]: I0906 01:00:18.461864 2258 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:00:18.461926 kubelet[2258]: E0906 01:00:18.461887 2258 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 01:00:18.460000 audit[2297]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.460000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed847c4d0 a2=0 a3=7ffed847c4bc items=0 ppid=2258 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 01:00:18.460000 audit[2298]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:18.460000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff839585c0 a2=0 a3=7fff839585ac items=0 ppid=2258 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 6 01:00:18.461000 audit[2299]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:18.461000 audit[2299]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffceaf9af70 a2=0 a3=7ffceaf9af5c items=0 ppid=2258 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 01:00:18.461000 audit[2300]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:18.461000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffddbb64450 a2=0 a3=7ffddbb6443c items=0 ppid=2258 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 6 01:00:18.461000 audit[2301]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:18.461000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff5400a540 a2=0 a3=7fff5400a52c items=0 ppid=2258 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:18.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 6 01:00:18.471531 kubelet[2258]: W0906 01:00:18.471476 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://139.178.90.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:18.471531 kubelet[2258]: E0906 01:00:18.471507 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.90.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:18.565259 kubelet[2258]: I0906 01:00:18.565089 2258 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.565849 kubelet[2258]: E0906 01:00:18.565770 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611494 kubelet[2258]: I0906 01:00:18.611394 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43729c3d3a155a7768cbf0838a552de5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"43729c3d3a155a7768cbf0838a552de5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611682 kubelet[2258]: I0906 01:00:18.611511 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611682 kubelet[2258]: I0906 01:00:18.611608 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611914 kubelet[2258]: I0906 01:00:18.611737 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611914 kubelet[2258]: I0906 01:00:18.611793 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.611914 kubelet[2258]: I0906 01:00:18.611874 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.612192 kubelet[2258]: I0906 01:00:18.611928 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.612192 kubelet[2258]: 
I0906 01:00:18.611976 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.612192 kubelet[2258]: I0906 01:00:18.612067 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.708703 kubelet[2258]: E0906 01:00:18.708592 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4cc2a8c2f2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="800ms" Sep 6 01:00:18.770248 kubelet[2258]: I0906 01:00:18.770151 2258 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.770935 kubelet[2258]: E0906 01:00:18.770840 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:18.875118 env[1665]: time="2025-09-06T01:00:18.874916335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2,Uid:8dc941a61fd689a6bd7434f75da9ea05,Namespace:kube-system,Attempt:0,}" Sep 6 01:00:18.878997 env[1665]: time="2025-09-06T01:00:18.878908651Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2,Uid:e6eaa0c7497bfad157f0e155b1afb238,Namespace:kube-system,Attempt:0,}" Sep 6 01:00:18.882086 env[1665]: time="2025-09-06T01:00:18.882017011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2,Uid:43729c3d3a155a7768cbf0838a552de5,Namespace:kube-system,Attempt:0,}" Sep 6 01:00:18.960898 kubelet[2258]: W0906 01:00:18.960756 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:18.961683 kubelet[2258]: E0906 01:00:18.960882 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:19.174493 kubelet[2258]: I0906 01:00:19.174403 2258 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:19.175156 kubelet[2258]: E0906 01:00:19.175057 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:19.333119 kubelet[2258]: W0906 01:00:19.332943 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4cc2a8c2f2&limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:19.333119 kubelet[2258]: E0906 01:00:19.333106 2258 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4cc2a8c2f2&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:19.389501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651970787.mount: Deactivated successfully. Sep 6 01:00:19.389991 env[1665]: time="2025-09-06T01:00:19.389937339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.391015 env[1665]: time="2025-09-06T01:00:19.390973644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.391668 env[1665]: time="2025-09-06T01:00:19.391626838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.392295 env[1665]: time="2025-09-06T01:00:19.392256895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.393132 env[1665]: time="2025-09-06T01:00:19.393093106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.394289 env[1665]: time="2025-09-06T01:00:19.394250247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.394692 env[1665]: time="2025-09-06T01:00:19.394650468Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.396197 env[1665]: time="2025-09-06T01:00:19.396156323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.397080 env[1665]: time="2025-09-06T01:00:19.397046351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.397940 env[1665]: time="2025-09-06T01:00:19.397893203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.398374 env[1665]: time="2025-09-06T01:00:19.398362577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.398850 env[1665]: time="2025-09-06T01:00:19.398809529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:19.403831 env[1665]: time="2025-09-06T01:00:19.403795074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:00:19.403831 env[1665]: time="2025-09-06T01:00:19.403816201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:00:19.403831 env[1665]: time="2025-09-06T01:00:19.403823188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:00:19.403991 env[1665]: time="2025-09-06T01:00:19.403893256Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d5cab31fa920d5ee8067bc3b1c4bc0a0990a7605872250d3a8912912eff4a77 pid=2316 runtime=io.containerd.runc.v2 Sep 6 01:00:19.403991 env[1665]: time="2025-09-06T01:00:19.403966038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:00:19.403991 env[1665]: time="2025-09-06T01:00:19.403982540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:00:19.404067 env[1665]: time="2025-09-06T01:00:19.403989855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:00:19.404067 env[1665]: time="2025-09-06T01:00:19.404048399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6d5bba1ea65ecb4e4d5c2d23f69cbfd91d2615515af2b68ea7cd02a2c51f591 pid=2317 runtime=io.containerd.runc.v2 Sep 6 01:00:19.405486 env[1665]: time="2025-09-06T01:00:19.405443537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:00:19.405486 env[1665]: time="2025-09-06T01:00:19.405468893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:00:19.405486 env[1665]: time="2025-09-06T01:00:19.405480642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:00:19.405615 env[1665]: time="2025-09-06T01:00:19.405560876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fa1ee46fd3a625f0102f569be750b3749706ef7dd739c6b00adbd593b1c7baf pid=2343 runtime=io.containerd.runc.v2 Sep 6 01:00:19.428662 kubelet[2258]: W0906 01:00:19.428591 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:19.428662 kubelet[2258]: E0906 01:00:19.428638 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:19.433117 env[1665]: time="2025-09-06T01:00:19.433089131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2,Uid:43729c3d3a155a7768cbf0838a552de5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6d5bba1ea65ecb4e4d5c2d23f69cbfd91d2615515af2b68ea7cd02a2c51f591\"" Sep 6 01:00:19.433691 env[1665]: time="2025-09-06T01:00:19.433677012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2,Uid:8dc941a61fd689a6bd7434f75da9ea05,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5cab31fa920d5ee8067bc3b1c4bc0a0990a7605872250d3a8912912eff4a77\"" Sep 6 01:00:19.434500 env[1665]: time="2025-09-06T01:00:19.434483880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2,Uid:e6eaa0c7497bfad157f0e155b1afb238,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"1fa1ee46fd3a625f0102f569be750b3749706ef7dd739c6b00adbd593b1c7baf\"" Sep 6 01:00:19.434603 env[1665]: time="2025-09-06T01:00:19.434588382Z" level=info msg="CreateContainer within sandbox \"f6d5bba1ea65ecb4e4d5c2d23f69cbfd91d2615515af2b68ea7cd02a2c51f591\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 01:00:19.434650 env[1665]: time="2025-09-06T01:00:19.434604325Z" level=info msg="CreateContainer within sandbox \"7d5cab31fa920d5ee8067bc3b1c4bc0a0990a7605872250d3a8912912eff4a77\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 01:00:19.436166 env[1665]: time="2025-09-06T01:00:19.436152294Z" level=info msg="CreateContainer within sandbox \"1fa1ee46fd3a625f0102f569be750b3749706ef7dd739c6b00adbd593b1c7baf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 01:00:19.440898 env[1665]: time="2025-09-06T01:00:19.440880675Z" level=info msg="CreateContainer within sandbox \"f6d5bba1ea65ecb4e4d5c2d23f69cbfd91d2615515af2b68ea7cd02a2c51f591\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5147accd33fed9ee881de3f78db82b342c7827762c5c3839de3da2a332180e03\"" Sep 6 01:00:19.441165 env[1665]: time="2025-09-06T01:00:19.441117629Z" level=info msg="StartContainer for \"5147accd33fed9ee881de3f78db82b342c7827762c5c3839de3da2a332180e03\"" Sep 6 01:00:19.442312 env[1665]: time="2025-09-06T01:00:19.442298562Z" level=info msg="CreateContainer within sandbox \"7d5cab31fa920d5ee8067bc3b1c4bc0a0990a7605872250d3a8912912eff4a77\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7c371e0ce9b64dca1bff776d751605f00b8077de1256ff114bee2f9a882336b\"" Sep 6 01:00:19.442471 env[1665]: time="2025-09-06T01:00:19.442456722Z" level=info msg="StartContainer for \"a7c371e0ce9b64dca1bff776d751605f00b8077de1256ff114bee2f9a882336b\"" Sep 6 01:00:19.443183 env[1665]: time="2025-09-06T01:00:19.443166537Z" level=info msg="CreateContainer within sandbox 
\"1fa1ee46fd3a625f0102f569be750b3749706ef7dd739c6b00adbd593b1c7baf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bda7b4ef84e9de187686454d150f6f4b0d7e453808555a615e73bbd5e94b64d5\"" Sep 6 01:00:19.443349 env[1665]: time="2025-09-06T01:00:19.443337430Z" level=info msg="StartContainer for \"bda7b4ef84e9de187686454d150f6f4b0d7e453808555a615e73bbd5e94b64d5\"" Sep 6 01:00:19.475055 env[1665]: time="2025-09-06T01:00:19.475019867Z" level=info msg="StartContainer for \"5147accd33fed9ee881de3f78db82b342c7827762c5c3839de3da2a332180e03\" returns successfully" Sep 6 01:00:19.475165 env[1665]: time="2025-09-06T01:00:19.475092934Z" level=info msg="StartContainer for \"a7c371e0ce9b64dca1bff776d751605f00b8077de1256ff114bee2f9a882336b\" returns successfully" Sep 6 01:00:19.475165 env[1665]: time="2025-09-06T01:00:19.475132798Z" level=info msg="StartContainer for \"bda7b4ef84e9de187686454d150f6f4b0d7e453808555a615e73bbd5e94b64d5\" returns successfully" Sep 6 01:00:19.490350 kubelet[2258]: W0906 01:00:19.490288 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.90.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.135:6443: connect: connection refused Sep 6 01:00:19.490350 kubelet[2258]: E0906 01:00:19.490331 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.90.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:00:19.977070 kubelet[2258]: I0906 01:00:19.977027 2258 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:20.118067 kubelet[2258]: E0906 01:00:20.118046 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"ci-3510.3.8-n-4cc2a8c2f2\" not found" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:20.244182 kubelet[2258]: I0906 01:00:20.244095 2258 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:20.244338 kubelet[2258]: E0906 01:00:20.244322 2258 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-4cc2a8c2f2\": node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.251421 kubelet[2258]: E0906 01:00:20.251403 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.352460 kubelet[2258]: E0906 01:00:20.352345 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.453598 kubelet[2258]: E0906 01:00:20.453504 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.554097 kubelet[2258]: E0906 01:00:20.553906 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.654849 kubelet[2258]: E0906 01:00:20.654742 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.755336 kubelet[2258]: E0906 01:00:20.755250 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.855524 kubelet[2258]: E0906 01:00:20.855355 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:20.956650 kubelet[2258]: E0906 01:00:20.956570 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.057048 
kubelet[2258]: E0906 01:00:21.056956 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.157295 kubelet[2258]: E0906 01:00:21.157113 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.258197 kubelet[2258]: E0906 01:00:21.258098 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.359055 kubelet[2258]: E0906 01:00:21.358961 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.460119 kubelet[2258]: E0906 01:00:21.460029 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:21.493307 kubelet[2258]: W0906 01:00:21.493253 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:21.494202 kubelet[2258]: W0906 01:00:21.494168 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:21.494392 kubelet[2258]: W0906 01:00:21.494285 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:22.074226 kubelet[2258]: I0906 01:00:22.074134 2258 apiserver.go:52] "Watching apiserver" Sep 6 01:00:22.107787 kubelet[2258]: I0906 01:00:22.107678 2258 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:00:22.491619 kubelet[2258]: W0906 01:00:22.491557 2258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:22.491884 kubelet[2258]: E0906 01:00:22.491676 2258 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:22.743061 systemd[1]: Reloading. Sep 6 01:00:22.777790 /usr/lib/systemd/system-generators/torcx-generator[2589]: time="2025-09-06T01:00:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:00:22.777805 /usr/lib/systemd/system-generators/torcx-generator[2589]: time="2025-09-06T01:00:22Z" level=info msg="torcx already run" Sep 6 01:00:22.865416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:00:22.865431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:00:22.880396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:00:22.936709 systemd[1]: Stopping kubelet.service... Sep 6 01:00:22.960183 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:00:22.960812 systemd[1]: Stopped kubelet.service. Sep 6 01:00:22.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:22.964806 systemd[1]: Starting kubelet.service... 
Sep 6 01:00:22.987199 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 6 01:00:22.987240 kernel: audit: type=1131 audit(1757120422.959:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:23.199163 systemd[1]: Started kubelet.service. Sep 6 01:00:23.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:23.219809 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:00:23.219809 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:00:23.219809 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 01:00:23.220057 kubelet[2664]: I0906 01:00:23.219837 2664 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:00:23.223132 kubelet[2664]: I0906 01:00:23.223091 2664 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:00:23.223132 kubelet[2664]: I0906 01:00:23.223101 2664 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:00:23.223248 kubelet[2664]: I0906 01:00:23.223218 2664 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:00:23.224225 kubelet[2664]: I0906 01:00:23.224211 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 01:00:23.225493 kubelet[2664]: I0906 01:00:23.225482 2664 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:00:23.227080 kubelet[2664]: E0906 01:00:23.227066 2664 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:00:23.227119 kubelet[2664]: I0906 01:00:23.227080 2664 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:00:23.264497 kernel: audit: type=1130 audit(1757120423.198:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:00:23.290521 kubelet[2664]: I0906 01:00:23.290491 2664 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:00:23.291091 kubelet[2664]: I0906 01:00:23.291040 2664 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:00:23.291249 kubelet[2664]: I0906 01:00:23.291177 2664 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:00:23.291509 kubelet[2664]: I0906 01:00:23.291218 2664 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-4cc2a8c2f2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyM
anagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:00:23.291509 kubelet[2664]: I0906 01:00:23.291489 2664 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:00:23.291509 kubelet[2664]: I0906 01:00:23.291506 2664 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:00:23.291888 kubelet[2664]: I0906 01:00:23.291543 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:00:23.291888 kubelet[2664]: I0906 01:00:23.291657 2664 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:00:23.291888 kubelet[2664]: I0906 01:00:23.291675 2664 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:00:23.291888 kubelet[2664]: I0906 01:00:23.291710 2664 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:00:23.291888 kubelet[2664]: I0906 01:00:23.291723 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:00:23.293087 kubelet[2664]: I0906 01:00:23.293048 2664 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:00:23.294063 kubelet[2664]: I0906 01:00:23.294029 2664 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:00:23.294817 kubelet[2664]: I0906 01:00:23.294795 2664 server.go:1274] "Started kubelet" Sep 6 01:00:23.294921 kubelet[2664]: I0906 01:00:23.294858 2664 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:00:23.294997 kubelet[2664]: I0906 01:00:23.294893 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:00:23.295178 kubelet[2664]: I0906 01:00:23.295156 2664 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:00:23.295000 audit[2664]: AVC avc: denied { mac_admin } for pid=2664 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:23.296151 kubelet[2664]: I0906 01:00:23.296044 2664 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 6 01:00:23.296151 kubelet[2664]: I0906 01:00:23.296088 2664 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 6 01:00:23.296151 kubelet[2664]: I0906 01:00:23.296113 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:00:23.296151 kubelet[2664]: I0906 01:00:23.296136 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:00:23.296433 kubelet[2664]: I0906 01:00:23.296180 2664 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:00:23.296433 kubelet[2664]: E0906 01:00:23.296234 2664 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4cc2a8c2f2\" not found" Sep 6 01:00:23.296433 kubelet[2664]: I0906 01:00:23.296262 2664 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:00:23.296652 kubelet[2664]: I0906 01:00:23.296479 2664 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:00:23.296926 kubelet[2664]: I0906 01:00:23.296900 2664 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:00:23.297113 kubelet[2664]: I0906 01:00:23.296901 2664 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:00:23.297219 kubelet[2664]: E0906 01:00:23.296997 2664 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:00:23.297219 kubelet[2664]: I0906 01:00:23.297172 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:00:23.299169 kubelet[2664]: I0906 01:00:23.299156 2664 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:00:23.303386 kubelet[2664]: I0906 01:00:23.303361 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:00:23.303856 kubelet[2664]: I0906 01:00:23.303837 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:00:23.303938 kubelet[2664]: I0906 01:00:23.303859 2664 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:00:23.303938 kubelet[2664]: I0906 01:00:23.303871 2664 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:00:23.303938 kubelet[2664]: E0906 01:00:23.303894 2664 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:00:23.318523 kubelet[2664]: I0906 01:00:23.318506 2664 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:00:23.318523 kubelet[2664]: I0906 01:00:23.318519 2664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:00:23.318645 kubelet[2664]: I0906 01:00:23.318532 2664 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:00:23.318645 kubelet[2664]: I0906 01:00:23.318640 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:00:23.318714 kubelet[2664]: I0906 01:00:23.318649 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:00:23.318714 kubelet[2664]: I0906 01:00:23.318668 2664 policy_none.go:49] "None policy: Start" Sep 6 01:00:23.319090 kubelet[2664]: I0906 
01:00:23.319081 2664 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:00:23.319130 kubelet[2664]: I0906 01:00:23.319094 2664 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:00:23.319188 kubelet[2664]: I0906 01:00:23.319181 2664 state_mem.go:75] "Updated machine memory state" Sep 6 01:00:23.319763 kubelet[2664]: I0906 01:00:23.319754 2664 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:00:23.319803 kubelet[2664]: I0906 01:00:23.319790 2664 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 6 01:00:23.319889 kubelet[2664]: I0906 01:00:23.319881 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:00:23.319926 kubelet[2664]: I0906 01:00:23.319888 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:00:23.320118 kubelet[2664]: I0906 01:00:23.320098 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:00:23.295000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:23.392347 kernel: audit: type=1400 audit(1757120423.295:207): avc: denied { mac_admin } for pid=2664 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:23.392382 kernel: audit: type=1401 audit(1757120423.295:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:23.392395 kernel: audit: type=1300 audit(1757120423.295:207): arch=c000003e syscall=188 success=no exit=-22 a0=c000a99080 a1=c00059b290 a2=c000a99050 a3=25 items=0 ppid=1 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:23.295000 audit[2664]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a99080 a1=c00059b290 a2=c000a99050 a3=25 items=0 ppid=1 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:23.410068 kubelet[2664]: W0906 01:00:23.410030 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:23.410068 kubelet[2664]: W0906 01:00:23.410039 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:23.410068 kubelet[2664]: E0906 01:00:23.410067 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.410160 kubelet[2664]: E0906 01:00:23.410067 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.410631 kubelet[2664]: W0906 01:00:23.410595 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:23.410631 kubelet[2664]: E0906 01:00:23.410616 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.422152 kubelet[2664]: I0906 01:00:23.422113 2664 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.428431 kubelet[2664]: I0906 01:00:23.428422 2664 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.428468 kubelet[2664]: I0906 01:00:23.428456 2664 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.487080 kernel: audit: type=1327 audit(1757120423.295:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:23.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:23.497152 kubelet[2664]: I0906 01:00:23.497112 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.578653 kernel: audit: type=1400 audit(1757120423.295:208): avc: denied { mac_admin } for pid=2664 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:23.295000 audit[2664]: AVC avc: denied { mac_admin } for pid=2664 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:23.597971 kubelet[2664]: I0906 01:00:23.597921 2664 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43729c3d3a155a7768cbf0838a552de5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"43729c3d3a155a7768cbf0838a552de5\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.597971 kubelet[2664]: I0906 01:00:23.597949 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.597971 kubelet[2664]: I0906 01:00:23.597961 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.597971 kubelet[2664]: I0906 01:00:23.597973 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.598068 kubelet[2664]: I0906 01:00:23.597982 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: 
\"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.598068 kubelet[2664]: I0906 01:00:23.597992 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.598068 kubelet[2664]: I0906 01:00:23.598003 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6eaa0c7497bfad157f0e155b1afb238-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"e6eaa0c7497bfad157f0e155b1afb238\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.598068 kubelet[2664]: I0906 01:00:23.598047 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dc941a61fd689a6bd7434f75da9ea05-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2\" (UID: \"8dc941a61fd689a6bd7434f75da9ea05\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:23.641750 kernel: audit: type=1401 audit(1757120423.295:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:23.295000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:23.673933 kernel: audit: type=1300 audit(1757120423.295:208): arch=c000003e syscall=188 success=no exit=-22 a0=c000501720 a1=c00059b2a8 a2=c000a99110 a3=25 items=0 ppid=1 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 6 01:00:23.295000 audit[2664]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000501720 a1=c00059b2a8 a2=c000a99110 a3=25 items=0 ppid=1 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:23.767729 kernel: audit: type=1327 audit(1757120423.295:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:23.295000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:23.319000 audit[2664]: AVC avc: denied { mac_admin } for pid=2664 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:00:23.319000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 6 01:00:23.319000 audit[2664]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0012bd320 a1=c000f4b6e0 a2=c0012bd2f0 a3=25 items=0 ppid=1 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:23.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 6 01:00:24.292641 
kubelet[2664]: I0906 01:00:24.292553 2664 apiserver.go:52] "Watching apiserver" Sep 6 01:00:24.297213 kubelet[2664]: I0906 01:00:24.297122 2664 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:00:24.317472 kubelet[2664]: W0906 01:00:24.317386 2664 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:00:24.317701 kubelet[2664]: E0906 01:00:24.317517 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:00:24.335898 kubelet[2664]: I0906 01:00:24.335849 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4cc2a8c2f2" podStartSLOduration=3.335821517 podStartE2EDuration="3.335821517s" podCreationTimestamp="2025-09-06 01:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:00:24.335753702 +0000 UTC m=+1.133490389" watchObservedRunningTime="2025-09-06 01:00:24.335821517 +0000 UTC m=+1.133558201" Sep 6 01:00:24.341318 kubelet[2664]: I0906 01:00:24.341280 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4cc2a8c2f2" podStartSLOduration=3.341264894 podStartE2EDuration="3.341264894s" podCreationTimestamp="2025-09-06 01:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:00:24.341101668 +0000 UTC m=+1.138838359" watchObservedRunningTime="2025-09-06 01:00:24.341264894 +0000 UTC m=+1.139001579" Sep 6 01:00:24.346723 kubelet[2664]: I0906 01:00:24.346652 2664 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4cc2a8c2f2" podStartSLOduration=3.346638717 podStartE2EDuration="3.346638717s" podCreationTimestamp="2025-09-06 01:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:00:24.346600771 +0000 UTC m=+1.144337457" watchObservedRunningTime="2025-09-06 01:00:24.346638717 +0000 UTC m=+1.144375399" Sep 6 01:00:28.645199 kubelet[2664]: I0906 01:00:28.645118 2664 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 01:00:28.646508 env[1665]: time="2025-09-06T01:00:28.646006582Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:00:28.647497 kubelet[2664]: I0906 01:00:28.646537 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 01:00:29.642413 kubelet[2664]: I0906 01:00:29.642368 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aedcc44-4a59-46ca-b8d2-3719278ff90f-xtables-lock\") pod \"kube-proxy-rbcbr\" (UID: \"0aedcc44-4a59-46ca-b8d2-3719278ff90f\") " pod="kube-system/kube-proxy-rbcbr" Sep 6 01:00:29.642573 kubelet[2664]: I0906 01:00:29.642424 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpbc\" (UniqueName: \"kubernetes.io/projected/0aedcc44-4a59-46ca-b8d2-3719278ff90f-kube-api-access-lcpbc\") pod \"kube-proxy-rbcbr\" (UID: \"0aedcc44-4a59-46ca-b8d2-3719278ff90f\") " pod="kube-system/kube-proxy-rbcbr" Sep 6 01:00:29.642573 kubelet[2664]: I0906 01:00:29.642461 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0aedcc44-4a59-46ca-b8d2-3719278ff90f-kube-proxy\") pod 
\"kube-proxy-rbcbr\" (UID: \"0aedcc44-4a59-46ca-b8d2-3719278ff90f\") " pod="kube-system/kube-proxy-rbcbr" Sep 6 01:00:29.642573 kubelet[2664]: I0906 01:00:29.642489 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aedcc44-4a59-46ca-b8d2-3719278ff90f-lib-modules\") pod \"kube-proxy-rbcbr\" (UID: \"0aedcc44-4a59-46ca-b8d2-3719278ff90f\") " pod="kube-system/kube-proxy-rbcbr" Sep 6 01:00:29.743348 kubelet[2664]: I0906 01:00:29.743235 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nljgx\" (UniqueName: \"kubernetes.io/projected/255bc58f-d097-49d9-9e83-01c07119feb5-kube-api-access-nljgx\") pod \"tigera-operator-58fc44c59b-ktn8x\" (UID: \"255bc58f-d097-49d9-9e83-01c07119feb5\") " pod="tigera-operator/tigera-operator-58fc44c59b-ktn8x" Sep 6 01:00:29.744170 kubelet[2664]: I0906 01:00:29.743395 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/255bc58f-d097-49d9-9e83-01c07119feb5-var-lib-calico\") pod \"tigera-operator-58fc44c59b-ktn8x\" (UID: \"255bc58f-d097-49d9-9e83-01c07119feb5\") " pod="tigera-operator/tigera-operator-58fc44c59b-ktn8x" Sep 6 01:00:29.757742 kubelet[2664]: I0906 01:00:29.757642 2664 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 01:00:29.828445 update_engine[1655]: I0906 01:00:29.828380 1655 update_attempter.cc:509] Updating boot flags... 
Sep 6 01:00:29.905533 env[1665]: time="2025-09-06T01:00:29.905392184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbcbr,Uid:0aedcc44-4a59-46ca-b8d2-3719278ff90f,Namespace:kube-system,Attempt:0,}" Sep 6 01:00:29.913729 env[1665]: time="2025-09-06T01:00:29.913675135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:00:29.913729 env[1665]: time="2025-09-06T01:00:29.913714997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:00:29.913869 env[1665]: time="2025-09-06T01:00:29.913731248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:00:29.913924 env[1665]: time="2025-09-06T01:00:29.913870297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bae88482ba1a610c1e03a4798c3f20bb1087835e00740d63f22045cf8f481cf pid=2770 runtime=io.containerd.runc.v2 Sep 6 01:00:29.932629 env[1665]: time="2025-09-06T01:00:29.932597732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbcbr,Uid:0aedcc44-4a59-46ca-b8d2-3719278ff90f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bae88482ba1a610c1e03a4798c3f20bb1087835e00740d63f22045cf8f481cf\"" Sep 6 01:00:29.933846 env[1665]: time="2025-09-06T01:00:29.933829098Z" level=info msg="CreateContainer within sandbox \"1bae88482ba1a610c1e03a4798c3f20bb1087835e00740d63f22045cf8f481cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 01:00:29.941176 env[1665]: time="2025-09-06T01:00:29.941123838Z" level=info msg="CreateContainer within sandbox \"1bae88482ba1a610c1e03a4798c3f20bb1087835e00740d63f22045cf8f481cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"02f4f4635d214bacd8510f49732863a97f3bd69053ba13293c53049dcb1a53f6\"" Sep 6 01:00:29.941456 env[1665]: time="2025-09-06T01:00:29.941439244Z" level=info msg="StartContainer for \"02f4f4635d214bacd8510f49732863a97f3bd69053ba13293c53049dcb1a53f6\"" Sep 6 01:00:29.951352 env[1665]: time="2025-09-06T01:00:29.951328604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ktn8x,Uid:255bc58f-d097-49d9-9e83-01c07119feb5,Namespace:tigera-operator,Attempt:0,}" Sep 6 01:00:29.957094 env[1665]: time="2025-09-06T01:00:29.957033269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:00:29.957094 env[1665]: time="2025-09-06T01:00:29.957056611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:00:29.957094 env[1665]: time="2025-09-06T01:00:29.957063901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:00:29.957214 env[1665]: time="2025-09-06T01:00:29.957150254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4279fe68207bc5a7b941a1efe42f4623a7c4fd1857772003beec462bc39779c pid=2832 runtime=io.containerd.runc.v2 Sep 6 01:00:29.964310 env[1665]: time="2025-09-06T01:00:29.964281520Z" level=info msg="StartContainer for \"02f4f4635d214bacd8510f49732863a97f3bd69053ba13293c53049dcb1a53f6\" returns successfully" Sep 6 01:00:29.984958 env[1665]: time="2025-09-06T01:00:29.984922311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ktn8x,Uid:255bc58f-d097-49d9-9e83-01c07119feb5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4279fe68207bc5a7b941a1efe42f4623a7c4fd1857772003beec462bc39779c\"" Sep 6 01:00:29.985766 env[1665]: time="2025-09-06T01:00:29.985729015Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 6 01:00:30.117000 audit[2921]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.156456 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 6 01:00:30.156566 kernel: audit: type=1325 audit(1757120430.117:210): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.117000 audit[2921]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfb9dc140 a2=0 a3=7ffcfb9dc12c items=0 ppid=2820 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.308192 kernel: audit: type=1300 audit(1757120430.117:210): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfb9dc140 a2=0 a3=7ffcfb9dc12c items=0 ppid=2820 pid=2921 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.308227 kernel: audit: type=1327 audit(1757120430.117:210): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:00:30.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:00:30.328753 kubelet[2664]: I0906 01:00:30.328685 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rbcbr" podStartSLOduration=1.328674753 podStartE2EDuration="1.328674753s" podCreationTimestamp="2025-09-06 01:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:00:30.328664679 +0000 UTC m=+7.126401370" watchObservedRunningTime="2025-09-06 01:00:30.328674753 +0000 UTC m=+7.126411436" Sep 6 01:00:30.365829 kernel: audit: type=1325 audit(1757120430.117:211): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.117000 audit[2922]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.423308 kernel: audit: type=1300 audit(1757120430.117:211): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8280bdf0 a2=0 a3=7ffc8280bddc items=0 ppid=2820 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.117000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8280bdf0 a2=0 a3=7ffc8280bddc items=0 ppid=2820 pid=2922 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.518540 kernel: audit: type=1327 audit(1757120430.117:211): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:00:30.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 6 01:00:30.576131 kernel: audit: type=1325 audit(1757120430.120:212): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.120000 audit[2923]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.633430 kernel: audit: type=1300 audit(1757120430.120:212): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeedaedcf0 a2=0 a3=7ffeedaedcdc items=0 ppid=2820 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.120000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeedaedcf0 a2=0 a3=7ffeedaedcdc items=0 ppid=2820 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.729136 kernel: audit: type=1327 audit(1757120430.120:212): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:00:30.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:00:30.120000 audit[2924]: 
NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.844183 kernel: audit: type=1325 audit(1757120430.120:213): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.120000 audit[2924]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaf4a9420 a2=0 a3=7ffcaf4a940c items=0 ppid=2820 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 6 01:00:30.123000 audit[2925]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.123000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7d6c0a50 a2=0 a3=7fff7d6c0a3c items=0 ppid=2820 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 01:00:30.124000 audit[2926]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.124000 audit[2926]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe95fa76f0 a2=0 a3=7ffe95fa76dc items=0 ppid=2820 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 6 01:00:30.219000 audit[2927]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.219000 audit[2927]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe1e337c50 a2=0 a3=7ffe1e337c3c items=0 ppid=2820 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 01:00:30.220000 audit[2929]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.220000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffca7b8b9a0 a2=0 a3=7ffca7b8b98c items=0 ppid=2820 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 6 01:00:30.222000 audit[2932]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.222000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe828d3290 a2=0 
a3=7ffe828d327c items=0 ppid=2820 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 6 01:00:30.223000 audit[2933]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.223000 audit[2933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea03288e0 a2=0 a3=7ffea03288cc items=0 ppid=2820 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 01:00:30.224000 audit[2935]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.224000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8d1a86b0 a2=0 a3=7ffe8d1a869c items=0 ppid=2820 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.224000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 01:00:30.225000 audit[2936]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.225000 audit[2936]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea03e3e10 a2=0 a3=7ffea03e3dfc items=0 ppid=2820 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 01:00:30.226000 audit[2938]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.226000 audit[2938]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffe79b2650 a2=0 a3=7fffe79b263c items=0 ppid=2820 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 01:00:30.228000 audit[2941]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.228000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7ffc0750a8f0 a2=0 a3=7ffc0750a8dc items=0 ppid=2820 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 6 01:00:30.228000 audit[2942]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.228000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8074c360 a2=0 a3=7ffc8074c34c items=0 ppid=2820 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 01:00:30.230000 audit[2944]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.230000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca250a340 a2=0 a3=7ffca250a32c items=0 ppid=2820 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.230000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 01:00:30.230000 audit[2945]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.230000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0bf6cae0 a2=0 a3=7ffe0bf6cacc items=0 ppid=2820 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 01:00:30.365000 audit[2947]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.365000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec5f56bf0 a2=0 a3=7ffec5f56bdc items=0 ppid=2820 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 01:00:30.844000 audit[2950]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.844000 audit[2950]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffdb9243be0 a2=0 a3=7ffdb9243bcc items=0 ppid=2820 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 6 01:00:30.846000 audit[2953]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.846000 audit[2953]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1d3b7370 a2=0 a3=7ffe1d3b735c items=0 ppid=2820 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 6 01:00:30.847000 audit[2954]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.847000 audit[2954]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff149f4c00 a2=0 a3=7fff149f4bec items=0 ppid=2820 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.847000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 6 01:00:30.848000 audit[2956]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.848000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe42ba7000 a2=0 a3=7ffe42ba6fec items=0 ppid=2820 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:00:30.850000 audit[2959]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.850000 audit[2959]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffac10ada0 a2=0 a3=7fffac10ad8c items=0 ppid=2820 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 6 01:00:30.850000 audit[2960]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.850000 audit[2960]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce17d1190 a2=0 a3=7ffce17d117c items=0 ppid=2820 pid=2960 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 6 01:00:30.852000 audit[2962]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 6 01:00:30.852000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee0366990 a2=0 a3=7ffee036697c items=0 ppid=2820 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 6 01:00:30.867000 audit[2968]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:30.867000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd4b460740 a2=0 a3=7ffd4b46072c items=0 ppid=2820 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:30.899000 audit[2968]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 
01:00:30.899000 audit[2968]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd4b460740 a2=0 a3=7ffd4b46072c items=0 ppid=2820 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:30.900000 audit[2973]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.900000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd54a6c400 a2=0 a3=7ffd54a6c3ec items=0 ppid=2820 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 6 01:00:30.901000 audit[2975]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.901000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd540efd90 a2=0 a3=7ffd540efd7c items=0 ppid=2820 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.901000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 6 01:00:30.904000 audit[2978]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.904000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe9618ac50 a2=0 a3=7ffe9618ac3c items=0 ppid=2820 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 6 01:00:30.905000 audit[2979]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.905000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce23a09c0 a2=0 a3=7ffce23a09ac items=0 ppid=2820 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 6 01:00:30.907000 audit[2981]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.907000 audit[2981]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe5e8ec570 a2=0 a3=7ffe5e8ec55c items=0 ppid=2820 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 6 01:00:30.908000 audit[2982]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.908000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc68b8a740 a2=0 a3=7ffc68b8a72c items=0 ppid=2820 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 6 01:00:30.910000 audit[2984]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.910000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffccb360ff0 a2=0 a3=7ffccb360fdc items=0 ppid=2820 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.910000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 6 01:00:30.913000 audit[2987]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.913000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffed6c7e1b0 a2=0 a3=7ffed6c7e19c items=0 ppid=2820 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 6 01:00:30.914000 audit[2988]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.914000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc53b974e0 a2=0 a3=7ffc53b974cc items=0 ppid=2820 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 6 01:00:30.916000 audit[2990]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.916000 audit[2990]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffece524630 a2=0 a3=7ffece52461c items=0 ppid=2820 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 6 01:00:30.917000 audit[2991]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.917000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe37942560 a2=0 a3=7ffe3794254c items=0 ppid=2820 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 6 01:00:30.920000 audit[2993]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 6 01:00:30.920000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4496a300 a2=0 a3=7ffe4496a2ec items=0 ppid=2820 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:30.920000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Sep 6 01:00:30.924000 audit[2996]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.924000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3d3e0570 a2=0 a3=7fff3d3e055c items=0 ppid=2820 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Sep 6 01:00:30.928000 audit[2999]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.928000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcabef32d0 a2=0 a3=7ffcabef32bc items=0 ppid=2820 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Sep 6 01:00:30.930000 audit[3000]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.930000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2f2f1760 a2=0 a3=7ffe2f2f174c items=0 ppid=2820 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Sep 6 01:00:30.933000 audit[3002]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.933000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff232a7c00 a2=0 a3=7fff232a7bec items=0 ppid=2820 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 6 01:00:30.938000 audit[3005]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.938000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffde9ec94e0 a2=0 a3=7ffde9ec94cc items=0 ppid=2820 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Sep 6 01:00:30.940000 audit[3006]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.940000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0f80e600 a2=0 a3=7fff0f80e5ec items=0 ppid=2820 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Sep 6 01:00:30.943000 audit[3008]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.943000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc972898f0 a2=0 a3=7ffc972898dc items=0 ppid=2820 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Sep 6 01:00:30.945000 audit[3009]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.945000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11a09560 a2=0 a3=7fff11a0954c items=0 ppid=2820 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Sep 6 01:00:30.948000 audit[3011]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.948000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4f1158a0 a2=0 a3=7ffe4f11588c items=0 ppid=2820 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 6 01:00:30.953000 audit[3014]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Sep 6 01:00:30.953000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8bf6b600 a2=0 a3=7ffc8bf6b5ec items=0 ppid=2820 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Sep 6 01:00:30.957000 audit[3016]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Sep 6 01:00:30.957000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff163eb940 a2=0 a3=7fff163eb92c items=0 ppid=2820 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.957000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:30.958000 audit[3016]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Sep 6 01:00:30.958000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff163eb940 a2=0 a3=7fff163eb92c items=0 ppid=2820 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:30.958000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:31.853382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505486305.mount: Deactivated successfully.
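The PROCTITLE values in the audit records above are the full argv of the auditing process, hex-encoded with NUL bytes separating arguments. A small illustrative Python helper (not part of the log) recovers the underlying ip6tables invocations:

```python
def decode_proctitle(hex_argv: str) -> str:
    """Decode an audit PROCTITLE value: the process argv, hex-encoded
    with NUL bytes separating the individual arguments."""
    args = bytes.fromhex(hex_argv).split(b"\x00")
    return " ".join(a.decode("ascii", errors="replace") for a in args)

# The chain-creation record for pid 3000 above decodes to the ip6tables
# invocation that created the KUBE-SERVICES chain in the nat table:
print(decode_proctitle(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
))  # -> ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t nat
```

Decoding the longer records the same way shows kube-proxy inserting the conntrack and comment rules ("kubernetes load balancer firewall", "kubernetes service portals") whose hex appears verbatim in the log.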
Sep 6 01:00:32.347637 env[1665]: time="2025-09-06T01:00:32.347612704Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:00:32.348167 env[1665]: time="2025-09-06T01:00:32.348154949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:00:32.348883 env[1665]: time="2025-09-06T01:00:32.348870725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:00:32.349921 env[1665]: time="2025-09-06T01:00:32.349910492Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:00:32.350158 env[1665]: time="2025-09-06T01:00:32.350145905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 6 01:00:32.351436 env[1665]: time="2025-09-06T01:00:32.351421798Z" level=info msg="CreateContainer within sandbox \"d4279fe68207bc5a7b941a1efe42f4623a7c4fd1857772003beec462bc39779c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 6 01:00:32.356019 env[1665]: time="2025-09-06T01:00:32.355950667Z" level=info msg="CreateContainer within sandbox \"d4279fe68207bc5a7b941a1efe42f4623a7c4fd1857772003beec462bc39779c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48be16e9502eed1be30f56c78dfb59c38a6d58fc4ebcb58836d954aeef58e57d\""
Sep 6 01:00:32.356272 env[1665]: time="2025-09-06T01:00:32.356257589Z" level=info msg="StartContainer for \"48be16e9502eed1be30f56c78dfb59c38a6d58fc4ebcb58836d954aeef58e57d\""
Sep 6 01:00:32.398028 env[1665]: time="2025-09-06T01:00:32.398001473Z" level=info msg="StartContainer for \"48be16e9502eed1be30f56c78dfb59c38a6d58fc4ebcb58836d954aeef58e57d\" returns successfully"
Sep 6 01:00:33.337605 kubelet[2664]: I0906 01:00:33.337573 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-ktn8x" podStartSLOduration=1.972310867 podStartE2EDuration="4.3375622s" podCreationTimestamp="2025-09-06 01:00:29 +0000 UTC" firstStartedPulling="2025-09-06 01:00:29.985510589 +0000 UTC m=+6.783247277" lastFinishedPulling="2025-09-06 01:00:32.350761925 +0000 UTC m=+9.148498610" observedRunningTime="2025-09-06 01:00:33.337493473 +0000 UTC m=+10.135230161" watchObservedRunningTime="2025-09-06 01:00:33.3375622 +0000 UTC m=+10.135298889"
Sep 6 01:00:36.926695 sudo[1919]: pam_unix(sudo:session): session closed for user root
Sep 6 01:00:36.925000 audit[1919]: USER_END pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:36.927795 sshd[1914]: pam_unix(sshd:session): session closed for user core
Sep 6 01:00:36.930626 systemd[1]: sshd@8-139.178.90.135:22-139.178.68.195:58994.service: Deactivated successfully.
Sep 6 01:00:36.931552 systemd-logind[1704]: Session 11 logged out. Waiting for processes to exit.
Sep 6 01:00:36.931566 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 01:00:36.932248 systemd-logind[1704]: Removed session 11.
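The "Observed pod startup duration" entry above carries enough fields to check its own arithmetic: podStartE2EDuration is the watch-observed running time minus the pod creation timestamp, and podStartSLOduration additionally excludes the image-pull window. This is inferred from the printed values, not from kubelet source; a minimal Python sketch (timestamps truncated to microseconds) reproduces both numbers:

```python
from datetime import datetime, timezone

def parse_utc(s: str) -> datetime:
    """Parse a kubelet timestamp like '2025-09-06 01:00:29.985510589 +0000 UTC',
    trimming nanoseconds to microseconds for datetime."""
    date, clock = s.split(" ")[0:2]
    if "." in clock:
        base, frac = clock.split(".")
        clock = base + "." + frac[:6].ljust(6, "0")
    return datetime.fromisoformat(f"{date} {clock}").replace(tzinfo=timezone.utc)

created  = parse_utc("2025-09-06 01:00:29 +0000 UTC")           # podCreationTimestamp
pull_beg = parse_utc("2025-09-06 01:00:29.985510589 +0000 UTC")  # firstStartedPulling
pull_end = parse_utc("2025-09-06 01:00:32.350761925 +0000 UTC")  # lastFinishedPulling
running  = parse_utc("2025-09-06 01:00:33.3375622 +0000 UTC")    # watchObservedRunningTime

e2e = (running - created).total_seconds()           # ~4.3375622  (podStartE2EDuration)
slo = e2e - (pull_end - pull_beg).total_seconds()   # ~1.9723109  (podStartSLOduration)
```

Both results match the logged values to within the precision lost by microsecond truncation.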
Sep 6 01:00:36.952980 kernel: kauditd_printk_skb: 143 callbacks suppressed
Sep 6 01:00:36.953083 kernel: audit: type=1106 audit(1757120436.925:261): pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:36.925000 audit[1919]: CRED_DISP pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:37.126634 kernel: audit: type=1104 audit(1757120436.925:262): pid=1919 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:37.126735 kernel: audit: type=1106 audit(1757120436.928:263): pid=1914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Sep 6 01:00:36.928000 audit[1914]: USER_END pid=1914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Sep 6 01:00:36.928000 audit[1914]: CRED_DISP pid=1914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Sep 6 01:00:37.310055 kernel: audit: type=1104 audit(1757120436.928:264): pid=1914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Sep 6 01:00:37.310188 kernel: audit: type=1131 audit(1757120436.929:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.135:22-139.178.68.195:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:36.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.90.135:22-139.178.68.195:58994 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:00:37.207000 audit[3180]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.457352 kernel: audit: type=1325 audit(1757120437.207:266): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.457443 kernel: audit: type=1300 audit(1757120437.207:266): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe41933f20 a2=0 a3=7ffe41933f0c items=0 ppid=2820 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.207000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe41933f20 a2=0 a3=7ffe41933f0c items=0 ppid=2820 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:37.613137 kernel: audit: type=1327 audit(1757120437.207:266): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:37.613000 audit[3180]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.613000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe41933f20 a2=0 a3=0 items=0 ppid=2820 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.675492 kernel: audit: type=1325 audit(1757120437.613:267): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.675581 kernel: audit: type=1300 audit(1757120437.613:267): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe41933f20 a2=0 a3=0 items=0 ppid=2820 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:37.780000 audit[3183]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.780000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff770aad30 a2=0 a3=7fff770aad1c items=0 ppid=2820 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:37.792000 audit[3183]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:37.792000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff770aad30 a2=0 a3=0 items=0 ppid=2820 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:37.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:39.018000 audit[3185]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:39.018000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd88d52760 a2=0 a3=7ffd88d5274c items=0 ppid=2820 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:39.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:39.030000 audit[3185]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:39.030000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd88d52760 a2=0 a3=0 items=0 ppid=2820 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:39.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:39.299000 audit[3187]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:39.299000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc39d95bc0 a2=0 a3=7ffc39d95bac items=0 ppid=2820 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:39.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:39.310000 audit[3187]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 6 01:00:39.310000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc39d95bc0 a2=0 a3=0 items=0 ppid=2820 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 01:00:39.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 6 01:00:39.406612 kubelet[2664]: I0906 01:00:39.406508 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33257f7e-36eb-463f-91c0-e095d0a17e2a-typha-certs\") pod \"calico-typha-7869f9999d-k8knn\" (UID: \"33257f7e-36eb-463f-91c0-e095d0a17e2a\") " pod="calico-system/calico-typha-7869f9999d-k8knn"
Sep 6 01:00:39.406612 kubelet[2664]: I0906 01:00:39.406596 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4b9t\" (UniqueName: \"kubernetes.io/projected/33257f7e-36eb-463f-91c0-e095d0a17e2a-kube-api-access-w4b9t\") pod \"calico-typha-7869f9999d-k8knn\" (UID: \"33257f7e-36eb-463f-91c0-e095d0a17e2a\") " pod="calico-system/calico-typha-7869f9999d-k8knn"
Sep 6 01:00:39.407612 kubelet[2664]: I0906 01:00:39.406659 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33257f7e-36eb-463f-91c0-e095d0a17e2a-tigera-ca-bundle\") pod \"calico-typha-7869f9999d-k8knn\" (UID: \"33257f7e-36eb-463f-91c0-e095d0a17e2a\") " pod="calico-system/calico-typha-7869f9999d-k8knn"
Sep 6 01:00:39.630461 env[1665]: time="2025-09-06T01:00:39.630299575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7869f9999d-k8knn,Uid:33257f7e-36eb-463f-91c0-e095d0a17e2a,Namespace:calico-system,Attempt:0,}"
Sep 6 01:00:39.647377 env[1665]: time="2025-09-06T01:00:39.647193598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:00:39.647377 env[1665]: time="2025-09-06T01:00:39.647271413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:00:39.647377 env[1665]: time="2025-09-06T01:00:39.647296814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:00:39.647717 env[1665]: time="2025-09-06T01:00:39.647612183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bd3b64f5bcda008039ab4748ca8ba003c1caaf36172fd302d53c133ff1bf3c8 pid=3197 runtime=io.containerd.runc.v2
Sep 6 01:00:39.725396 env[1665]: time="2025-09-06T01:00:39.725369629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7869f9999d-k8knn,Uid:33257f7e-36eb-463f-91c0-e095d0a17e2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bd3b64f5bcda008039ab4748ca8ba003c1caaf36172fd302d53c133ff1bf3c8\""
Sep 6 01:00:39.726077 env[1665]: time="2025-09-06T01:00:39.726062400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 6 01:00:39.809098 kubelet[2664]: I0906 01:00:39.808992 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea986a1b-2625-452b-a0e8-32ff5a4052ca-node-certs\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809098 kubelet[2664]: I0906 01:00:39.809083 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-lib-modules\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809497 kubelet[2664]: I0906 01:00:39.809134 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea986a1b-2625-452b-a0e8-32ff5a4052ca-tigera-ca-bundle\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809497 kubelet[2664]: I0906 01:00:39.809184 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-cni-log-dir\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809497 kubelet[2664]: I0906 01:00:39.809234 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-policysync\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809497 kubelet[2664]: I0906 01:00:39.809280 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-var-lib-calico\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809497 kubelet[2664]: I0906 01:00:39.809323 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-var-run-calico\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809969 kubelet[2664]: I0906 01:00:39.809369 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-xtables-lock\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809969 kubelet[2664]: I0906 01:00:39.809414 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-cni-net-dir\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809969 kubelet[2664]: I0906 01:00:39.809483 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stq2d\" (UniqueName: \"kubernetes.io/projected/ea986a1b-2625-452b-a0e8-32ff5a4052ca-kube-api-access-stq2d\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809969 kubelet[2664]: I0906 01:00:39.809531 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-flexvol-driver-host\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.809969 kubelet[2664]: I0906 01:00:39.809575 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea986a1b-2625-452b-a0e8-32ff5a4052ca-cni-bin-dir\") pod \"calico-node-rv8rk\" (UID: \"ea986a1b-2625-452b-a0e8-32ff5a4052ca\") " pod="calico-system/calico-node-rv8rk"
Sep 6 01:00:39.912925 kubelet[2664]: E0906 01:00:39.912880 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:39.912925 kubelet[2664]: W0906 01:00:39.912921 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:39.913294 kubelet[2664]: E0906 01:00:39.912963 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:39.918577 kubelet[2664]: E0906 01:00:39.918522 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:39.918577 kubelet[2664]: W0906 01:00:39.918560 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:39.919001 kubelet[2664]: E0906 01:00:39.918595 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:39.930209 kubelet[2664]: E0906 01:00:39.930145 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:39.930209 kubelet[2664]: W0906 01:00:39.930184 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:39.930209 kubelet[2664]: E0906 01:00:39.930219 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.008987 kubelet[2664]: E0906 01:00:40.008862 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94"
Sep 6 01:00:40.009949 env[1665]: time="2025-09-06T01:00:40.009854995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rv8rk,Uid:ea986a1b-2625-452b-a0e8-32ff5a4052ca,Namespace:calico-system,Attempt:0,}"
Sep 6 01:00:40.029068 env[1665]: time="2025-09-06T01:00:40.028996498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:00:40.029068 env[1665]: time="2025-09-06T01:00:40.029032717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:00:40.029068 env[1665]: time="2025-09-06T01:00:40.029045628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:00:40.029259 env[1665]: time="2025-09-06T01:00:40.029158872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2 pid=3250 runtime=io.containerd.runc.v2
Sep 6 01:00:40.054843 env[1665]: time="2025-09-06T01:00:40.054793934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rv8rk,Uid:ea986a1b-2625-452b-a0e8-32ff5a4052ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\""
Sep 6 01:00:40.103665 kubelet[2664]: E0906 01:00:40.103587 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.103665 kubelet[2664]: W0906 01:00:40.103644 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.104102 kubelet[2664]: E0906 01:00:40.103708 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.104405 kubelet[2664]: E0906 01:00:40.104369 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.104631 kubelet[2664]: W0906 01:00:40.104404 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.104631 kubelet[2664]: E0906 01:00:40.104472 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.104996 kubelet[2664]: E0906 01:00:40.104959 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.104996 kubelet[2664]: W0906 01:00:40.104994 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.105251 kubelet[2664]: E0906 01:00:40.105029 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.105566 kubelet[2664]: E0906 01:00:40.105538 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.105566 kubelet[2664]: W0906 01:00:40.105566 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.105839 kubelet[2664]: E0906 01:00:40.105595 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.106135 kubelet[2664]: E0906 01:00:40.106103 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.106262 kubelet[2664]: W0906 01:00:40.106139 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.106262 kubelet[2664]: E0906 01:00:40.106172 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.106677 kubelet[2664]: E0906 01:00:40.106650 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.106820 kubelet[2664]: W0906 01:00:40.106678 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.106820 kubelet[2664]: E0906 01:00:40.106711 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.107144 kubelet[2664]: E0906 01:00:40.107120 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.107265 kubelet[2664]: W0906 01:00:40.107144 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.107265 kubelet[2664]: E0906 01:00:40.107169 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 6 01:00:40.107661 kubelet[2664]: E0906 01:00:40.107627 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 6 01:00:40.107661 kubelet[2664]: W0906 01:00:40.107656 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 6 01:00:40.107907 kubelet[2664]: E0906 01:00:40.107683 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 6 01:00:40.108094 kubelet[2664]: E0906 01:00:40.108071 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.108196 kubelet[2664]: W0906 01:00:40.108094 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.108196 kubelet[2664]: E0906 01:00:40.108117 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.108602 kubelet[2664]: E0906 01:00:40.108573 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.108716 kubelet[2664]: W0906 01:00:40.108601 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.108716 kubelet[2664]: E0906 01:00:40.108627 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.109065 kubelet[2664]: E0906 01:00:40.109038 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.109169 kubelet[2664]: W0906 01:00:40.109066 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.109169 kubelet[2664]: E0906 01:00:40.109092 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.109525 kubelet[2664]: E0906 01:00:40.109495 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.109525 kubelet[2664]: W0906 01:00:40.109520 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.109787 kubelet[2664]: E0906 01:00:40.109543 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.109973 kubelet[2664]: E0906 01:00:40.109945 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.110079 kubelet[2664]: W0906 01:00:40.109973 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.110079 kubelet[2664]: E0906 01:00:40.110001 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.110501 kubelet[2664]: E0906 01:00:40.110466 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.110501 kubelet[2664]: W0906 01:00:40.110494 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.110758 kubelet[2664]: E0906 01:00:40.110518 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.110942 kubelet[2664]: E0906 01:00:40.110912 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.110942 kubelet[2664]: W0906 01:00:40.110939 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.111155 kubelet[2664]: E0906 01:00:40.110965 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.111460 kubelet[2664]: E0906 01:00:40.111403 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.111578 kubelet[2664]: W0906 01:00:40.111458 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.111578 kubelet[2664]: E0906 01:00:40.111491 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.111977 kubelet[2664]: E0906 01:00:40.111936 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.111977 kubelet[2664]: W0906 01:00:40.111967 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.112279 kubelet[2664]: E0906 01:00:40.111992 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.112438 kubelet[2664]: E0906 01:00:40.112400 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.112572 kubelet[2664]: W0906 01:00:40.112450 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.112572 kubelet[2664]: E0906 01:00:40.112476 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.112900 kubelet[2664]: E0906 01:00:40.112873 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.113011 kubelet[2664]: W0906 01:00:40.112899 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.113011 kubelet[2664]: E0906 01:00:40.112925 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.113392 kubelet[2664]: E0906 01:00:40.113366 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.113528 kubelet[2664]: W0906 01:00:40.113393 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.113528 kubelet[2664]: E0906 01:00:40.113417 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.114040 kubelet[2664]: E0906 01:00:40.114015 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.114162 kubelet[2664]: W0906 01:00:40.114040 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.114162 kubelet[2664]: E0906 01:00:40.114065 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.114162 kubelet[2664]: I0906 01:00:40.114119 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/993f1b5f-0640-48b4-ae84-9939b8fffa94-socket-dir\") pod \"csi-node-driver-twvsq\" (UID: \"993f1b5f-0640-48b4-ae84-9939b8fffa94\") " pod="calico-system/csi-node-driver-twvsq" Sep 6 01:00:40.114634 kubelet[2664]: E0906 01:00:40.114604 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.114634 kubelet[2664]: W0906 01:00:40.114633 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.114895 kubelet[2664]: E0906 01:00:40.114665 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.114895 kubelet[2664]: I0906 01:00:40.114705 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/993f1b5f-0640-48b4-ae84-9939b8fffa94-varrun\") pod \"csi-node-driver-twvsq\" (UID: \"993f1b5f-0640-48b4-ae84-9939b8fffa94\") " pod="calico-system/csi-node-driver-twvsq" Sep 6 01:00:40.115317 kubelet[2664]: E0906 01:00:40.115272 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.115483 kubelet[2664]: W0906 01:00:40.115321 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.115483 kubelet[2664]: E0906 01:00:40.115372 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.115874 kubelet[2664]: E0906 01:00:40.115840 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.115874 kubelet[2664]: W0906 01:00:40.115869 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.116094 kubelet[2664]: E0906 01:00:40.115905 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.116343 kubelet[2664]: E0906 01:00:40.116318 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.116484 kubelet[2664]: W0906 01:00:40.116344 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.116484 kubelet[2664]: E0906 01:00:40.116378 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.116484 kubelet[2664]: I0906 01:00:40.116463 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/993f1b5f-0640-48b4-ae84-9939b8fffa94-registration-dir\") pod \"csi-node-driver-twvsq\" (UID: \"993f1b5f-0640-48b4-ae84-9939b8fffa94\") " pod="calico-system/csi-node-driver-twvsq" Sep 6 01:00:40.117117 kubelet[2664]: E0906 01:00:40.117063 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.117297 kubelet[2664]: W0906 01:00:40.117116 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.117297 kubelet[2664]: E0906 01:00:40.117177 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.117744 kubelet[2664]: E0906 01:00:40.117682 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.117744 kubelet[2664]: W0906 01:00:40.117724 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.117999 kubelet[2664]: E0906 01:00:40.117777 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.118375 kubelet[2664]: E0906 01:00:40.118322 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.118375 kubelet[2664]: W0906 01:00:40.118365 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.118634 kubelet[2664]: E0906 01:00:40.118432 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.118634 kubelet[2664]: I0906 01:00:40.118513 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw9c\" (UniqueName: \"kubernetes.io/projected/993f1b5f-0640-48b4-ae84-9939b8fffa94-kube-api-access-8hw9c\") pod \"csi-node-driver-twvsq\" (UID: \"993f1b5f-0640-48b4-ae84-9939b8fffa94\") " pod="calico-system/csi-node-driver-twvsq" Sep 6 01:00:40.119041 kubelet[2664]: E0906 01:00:40.118986 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.119041 kubelet[2664]: W0906 01:00:40.119032 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.119252 kubelet[2664]: E0906 01:00:40.119074 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.119550 kubelet[2664]: E0906 01:00:40.119506 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.119550 kubelet[2664]: W0906 01:00:40.119530 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.119773 kubelet[2664]: E0906 01:00:40.119560 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.120048 kubelet[2664]: E0906 01:00:40.120015 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.120048 kubelet[2664]: W0906 01:00:40.120042 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.120331 kubelet[2664]: E0906 01:00:40.120076 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.120331 kubelet[2664]: I0906 01:00:40.120125 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/993f1b5f-0640-48b4-ae84-9939b8fffa94-kubelet-dir\") pod \"csi-node-driver-twvsq\" (UID: \"993f1b5f-0640-48b4-ae84-9939b8fffa94\") " pod="calico-system/csi-node-driver-twvsq" Sep 6 01:00:40.120914 kubelet[2664]: E0906 01:00:40.120842 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.120914 kubelet[2664]: W0906 01:00:40.120896 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.121209 kubelet[2664]: E0906 01:00:40.120961 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.121552 kubelet[2664]: E0906 01:00:40.121516 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.121731 kubelet[2664]: W0906 01:00:40.121555 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.121731 kubelet[2664]: E0906 01:00:40.121606 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.122204 kubelet[2664]: E0906 01:00:40.122162 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.122321 kubelet[2664]: W0906 01:00:40.122213 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.122321 kubelet[2664]: E0906 01:00:40.122266 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.122852 kubelet[2664]: E0906 01:00:40.122820 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.122972 kubelet[2664]: W0906 01:00:40.122861 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.122972 kubelet[2664]: E0906 01:00:40.122907 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.221908 kubelet[2664]: E0906 01:00:40.221759 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.221908 kubelet[2664]: W0906 01:00:40.221804 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.221908 kubelet[2664]: E0906 01:00:40.221847 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.222533 kubelet[2664]: E0906 01:00:40.222499 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.222688 kubelet[2664]: W0906 01:00:40.222540 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.222688 kubelet[2664]: E0906 01:00:40.222590 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.223174 kubelet[2664]: E0906 01:00:40.223132 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.223325 kubelet[2664]: W0906 01:00:40.223175 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.223325 kubelet[2664]: E0906 01:00:40.223225 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.223805 kubelet[2664]: E0906 01:00:40.223747 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.223805 kubelet[2664]: W0906 01:00:40.223777 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.224056 kubelet[2664]: E0906 01:00:40.223812 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.224292 kubelet[2664]: E0906 01:00:40.224265 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.224403 kubelet[2664]: W0906 01:00:40.224297 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.224537 kubelet[2664]: E0906 01:00:40.224451 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.224872 kubelet[2664]: E0906 01:00:40.224840 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.224985 kubelet[2664]: W0906 01:00:40.224881 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.225087 kubelet[2664]: E0906 01:00:40.225009 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.225477 kubelet[2664]: E0906 01:00:40.225388 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.225477 kubelet[2664]: W0906 01:00:40.225464 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.225852 kubelet[2664]: E0906 01:00:40.225532 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.226882 kubelet[2664]: E0906 01:00:40.226836 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.226882 kubelet[2664]: W0906 01:00:40.226868 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.227285 kubelet[2664]: E0906 01:00:40.226905 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.227458 kubelet[2664]: E0906 01:00:40.227365 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.227458 kubelet[2664]: W0906 01:00:40.227397 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.227458 kubelet[2664]: E0906 01:00:40.227452 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.227940 kubelet[2664]: E0906 01:00:40.227891 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.227940 kubelet[2664]: W0906 01:00:40.227917 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.228153 kubelet[2664]: E0906 01:00:40.227982 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.228308 kubelet[2664]: E0906 01:00:40.228285 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.228416 kubelet[2664]: W0906 01:00:40.228310 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.228416 kubelet[2664]: E0906 01:00:40.228371 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.228871 kubelet[2664]: E0906 01:00:40.228791 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.228871 kubelet[2664]: W0906 01:00:40.228826 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.229146 kubelet[2664]: E0906 01:00:40.228908 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.229312 kubelet[2664]: E0906 01:00:40.229275 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.229457 kubelet[2664]: W0906 01:00:40.229312 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.229457 kubelet[2664]: E0906 01:00:40.229388 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.229859 kubelet[2664]: E0906 01:00:40.229819 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.229859 kubelet[2664]: W0906 01:00:40.229856 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.230239 kubelet[2664]: E0906 01:00:40.229939 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.230511 kubelet[2664]: E0906 01:00:40.230271 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.230511 kubelet[2664]: W0906 01:00:40.230294 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.230511 kubelet[2664]: E0906 01:00:40.230351 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.230982 kubelet[2664]: E0906 01:00:40.230723 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.230982 kubelet[2664]: W0906 01:00:40.230747 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.230982 kubelet[2664]: E0906 01:00:40.230818 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.231327 kubelet[2664]: E0906 01:00:40.231148 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.231327 kubelet[2664]: W0906 01:00:40.231173 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.231327 kubelet[2664]: E0906 01:00:40.231239 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.231664 kubelet[2664]: E0906 01:00:40.231615 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.231664 kubelet[2664]: W0906 01:00:40.231639 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.231898 kubelet[2664]: E0906 01:00:40.231706 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.232049 kubelet[2664]: E0906 01:00:40.232023 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.232220 kubelet[2664]: W0906 01:00:40.232047 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.232220 kubelet[2664]: E0906 01:00:40.232112 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.232484 kubelet[2664]: E0906 01:00:40.232459 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.232659 kubelet[2664]: W0906 01:00:40.232484 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.232659 kubelet[2664]: E0906 01:00:40.232550 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.232892 kubelet[2664]: E0906 01:00:40.232853 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.232892 kubelet[2664]: W0906 01:00:40.232873 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.233173 kubelet[2664]: E0906 01:00:40.232936 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.233280 kubelet[2664]: E0906 01:00:40.233263 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.233409 kubelet[2664]: W0906 01:00:40.233284 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.233409 kubelet[2664]: E0906 01:00:40.233348 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.233858 kubelet[2664]: E0906 01:00:40.233828 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.233858 kubelet[2664]: W0906 01:00:40.233855 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.234196 kubelet[2664]: E0906 01:00:40.233925 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.234381 kubelet[2664]: E0906 01:00:40.234284 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.234381 kubelet[2664]: W0906 01:00:40.234305 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.234381 kubelet[2664]: E0906 01:00:40.234336 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.234861 kubelet[2664]: E0906 01:00:40.234828 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.234861 kubelet[2664]: W0906 01:00:40.234855 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.235202 kubelet[2664]: E0906 01:00:40.234883 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:40.255007 kubelet[2664]: E0906 01:00:40.254924 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:40.255007 kubelet[2664]: W0906 01:00:40.254961 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:40.255007 kubelet[2664]: E0906 01:00:40.254997 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:40.333000 audit[3350]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:40.333000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdba80dca0 a2=0 a3=7ffdba80dc8c items=0 ppid=2820 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:40.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:40.353000 audit[3350]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:40.353000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdba80dca0 a2=0 a3=0 items=0 ppid=2820 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:40.353000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:41.304671 kubelet[2664]: E0906 01:00:41.304591 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:43.305027 kubelet[2664]: E0906 01:00:43.304913 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:45.304880 kubelet[2664]: E0906 01:00:45.304761 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:47.304259 kubelet[2664]: E0906 01:00:47.304204 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:49.305038 kubelet[2664]: E0906 01:00:49.304953 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" 
podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:51.304760 kubelet[2664]: E0906 01:00:51.304693 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:53.305314 kubelet[2664]: E0906 01:00:53.305226 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:54.592907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269692064.mount: Deactivated successfully. Sep 6 01:00:55.100791 env[1665]: time="2025-09-06T01:00:55.100741137Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:55.101297 env[1665]: time="2025-09-06T01:00:55.101283007Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:55.101911 env[1665]: time="2025-09-06T01:00:55.101900671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:55.102557 env[1665]: time="2025-09-06T01:00:55.102545346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 01:00:55.102845 env[1665]: time="2025-09-06T01:00:55.102829533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 6 01:00:55.103513 env[1665]: time="2025-09-06T01:00:55.103500730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 6 01:00:55.106979 env[1665]: time="2025-09-06T01:00:55.106933867Z" level=info msg="CreateContainer within sandbox \"4bd3b64f5bcda008039ab4748ca8ba003c1caaf36172fd302d53c133ff1bf3c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 6 01:00:55.111218 env[1665]: time="2025-09-06T01:00:55.111173342Z" level=info msg="CreateContainer within sandbox \"4bd3b64f5bcda008039ab4748ca8ba003c1caaf36172fd302d53c133ff1bf3c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eff6d2d9af07242cf42bfbfd13559410cd116171c83400b5a29e6e1c7b1a6ff0\"" Sep 6 01:00:55.111473 env[1665]: time="2025-09-06T01:00:55.111456059Z" level=info msg="StartContainer for \"eff6d2d9af07242cf42bfbfd13559410cd116171c83400b5a29e6e1c7b1a6ff0\"" Sep 6 01:00:55.144261 env[1665]: time="2025-09-06T01:00:55.144236102Z" level=info msg="StartContainer for \"eff6d2d9af07242cf42bfbfd13559410cd116171c83400b5a29e6e1c7b1a6ff0\" returns successfully" Sep 6 01:00:55.304970 kubelet[2664]: E0906 01:00:55.304880 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:55.405095 kubelet[2664]: I0906 01:00:55.404852 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7869f9999d-k8knn" podStartSLOduration=1.027299107 
podStartE2EDuration="16.404808106s" podCreationTimestamp="2025-09-06 01:00:39 +0000 UTC" firstStartedPulling="2025-09-06 01:00:39.725913398 +0000 UTC m=+16.523650081" lastFinishedPulling="2025-09-06 01:00:55.103422393 +0000 UTC m=+31.901159080" observedRunningTime="2025-09-06 01:00:55.40479986 +0000 UTC m=+32.202536605" watchObservedRunningTime="2025-09-06 01:00:55.404808106 +0000 UTC m=+32.202544837" Sep 6 01:00:55.427584 kubelet[2664]: E0906 01:00:55.427494 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.427584 kubelet[2664]: W0906 01:00:55.427541 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.427584 kubelet[2664]: E0906 01:00:55.427591 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.428249 kubelet[2664]: E0906 01:00:55.428176 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.428249 kubelet[2664]: W0906 01:00:55.428211 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.428249 kubelet[2664]: E0906 01:00:55.428243 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.428871 kubelet[2664]: E0906 01:00:55.428788 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.428871 kubelet[2664]: W0906 01:00:55.428833 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.428871 kubelet[2664]: E0906 01:00:55.428868 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.429384 kubelet[2664]: E0906 01:00:55.429356 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.429532 kubelet[2664]: W0906 01:00:55.429383 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.429532 kubelet[2664]: E0906 01:00:55.429412 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.430024 kubelet[2664]: E0906 01:00:55.429948 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.430024 kubelet[2664]: W0906 01:00:55.429981 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.430024 kubelet[2664]: E0906 01:00:55.430017 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.430493 kubelet[2664]: E0906 01:00:55.430453 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.430493 kubelet[2664]: W0906 01:00:55.430479 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.430713 kubelet[2664]: E0906 01:00:55.430505 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.430976 kubelet[2664]: E0906 01:00:55.430904 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.430976 kubelet[2664]: W0906 01:00:55.430935 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.430976 kubelet[2664]: E0906 01:00:55.430968 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.431486 kubelet[2664]: E0906 01:00:55.431415 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.431486 kubelet[2664]: W0906 01:00:55.431474 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.431706 kubelet[2664]: E0906 01:00:55.431501 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.431979 kubelet[2664]: E0906 01:00:55.431954 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.432083 kubelet[2664]: W0906 01:00:55.431979 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.432083 kubelet[2664]: E0906 01:00:55.432005 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.432432 kubelet[2664]: E0906 01:00:55.432396 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.432544 kubelet[2664]: W0906 01:00:55.432442 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.432544 kubelet[2664]: E0906 01:00:55.432468 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.432898 kubelet[2664]: E0906 01:00:55.432846 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.432898 kubelet[2664]: W0906 01:00:55.432872 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.432898 kubelet[2664]: E0906 01:00:55.432896 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.433289 kubelet[2664]: E0906 01:00:55.433265 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.433406 kubelet[2664]: W0906 01:00:55.433290 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.433406 kubelet[2664]: E0906 01:00:55.433315 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.433799 kubelet[2664]: E0906 01:00:55.433773 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.433912 kubelet[2664]: W0906 01:00:55.433798 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.433912 kubelet[2664]: E0906 01:00:55.433823 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.434302 kubelet[2664]: E0906 01:00:55.434252 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.434302 kubelet[2664]: W0906 01:00:55.434276 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.434302 kubelet[2664]: E0906 01:00:55.434301 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.434730 kubelet[2664]: E0906 01:00:55.434679 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.434730 kubelet[2664]: W0906 01:00:55.434703 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.434730 kubelet[2664]: E0906 01:00:55.434731 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.451364 kubelet[2664]: E0906 01:00:55.451283 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.451364 kubelet[2664]: W0906 01:00:55.451318 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.451364 kubelet[2664]: E0906 01:00:55.451352 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.452014 kubelet[2664]: E0906 01:00:55.451936 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.452014 kubelet[2664]: W0906 01:00:55.451971 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.452014 kubelet[2664]: E0906 01:00:55.452010 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.452630 kubelet[2664]: E0906 01:00:55.452556 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.452630 kubelet[2664]: W0906 01:00:55.452593 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.452630 kubelet[2664]: E0906 01:00:55.452631 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.453157 kubelet[2664]: E0906 01:00:55.453077 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.453157 kubelet[2664]: W0906 01:00:55.453100 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.453157 kubelet[2664]: E0906 01:00:55.453129 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.453661 kubelet[2664]: E0906 01:00:55.453527 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.453661 kubelet[2664]: W0906 01:00:55.453548 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.453852 kubelet[2664]: E0906 01:00:55.453651 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.454076 kubelet[2664]: E0906 01:00:55.454007 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.454076 kubelet[2664]: W0906 01:00:55.454033 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.454376 kubelet[2664]: E0906 01:00:55.454152 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.454516 kubelet[2664]: E0906 01:00:55.454488 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.454516 kubelet[2664]: W0906 01:00:55.454511 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.454699 kubelet[2664]: E0906 01:00:55.454626 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.455057 kubelet[2664]: E0906 01:00:55.454985 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.455057 kubelet[2664]: W0906 01:00:55.455009 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.455057 kubelet[2664]: E0906 01:00:55.455040 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:55.455702 kubelet[2664]: E0906 01:00:55.455645 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.455702 kubelet[2664]: W0906 01:00:55.455676 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.455922 kubelet[2664]: E0906 01:00:55.455716 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:55.460930 kubelet[2664]: E0906 01:00:55.460878 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:55.460930 kubelet[2664]: W0906 01:00:55.460904 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:55.460930 kubelet[2664]: E0906 01:00:55.460930 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 6 01:00:56.386565 kubelet[2664]: I0906 01:00:56.386467 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:00:56.443326 kubelet[2664]: E0906 01:00:56.443241 2664 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 6 01:00:56.443326 kubelet[2664]: W0906 01:00:56.443279 2664 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 6 01:00:56.443326 kubelet[2664]: E0906 01:00:56.443316 2664 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 6 01:00:56.802699 env[1665]: time="2025-09-06T01:00:56.802645059Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:56.803243 env[1665]: time="2025-09-06T01:00:56.803201582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:56.803799 env[1665]: time="2025-09-06T01:00:56.803758914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:56.804389 env[1665]: time="2025-09-06T01:00:56.804347313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:00:56.804735 env[1665]: time="2025-09-06T01:00:56.804686628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 6 01:00:56.805729 env[1665]: time="2025-09-06T01:00:56.805715703Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 6 01:00:56.810580 env[1665]: time="2025-09-06T01:00:56.810535521Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6\"" Sep 6 
01:00:56.810869 env[1665]: time="2025-09-06T01:00:56.810853702Z" level=info msg="StartContainer for \"e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6\"" Sep 6 01:00:56.834453 env[1665]: time="2025-09-06T01:00:56.834421430Z" level=info msg="StartContainer for \"e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6\" returns successfully" Sep 6 01:00:56.847605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6-rootfs.mount: Deactivated successfully. Sep 6 01:00:57.286842 env[1665]: time="2025-09-06T01:00:57.286745946Z" level=info msg="shim disconnected" id=e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6 Sep 6 01:00:57.287309 env[1665]: time="2025-09-06T01:00:57.286841610Z" level=warning msg="cleaning up after shim disconnected" id=e66bbe2d3aefbd14b0ece3b48857761a701718b831b438205a9de6bf64643ef6 namespace=k8s.io Sep 6 01:00:57.287309 env[1665]: time="2025-09-06T01:00:57.286885029Z" level=info msg="cleaning up dead shim" Sep 6 01:00:57.295314 env[1665]: time="2025-09-06T01:00:57.295290215Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:00:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3517 runtime=io.containerd.runc.v2\n" Sep 6 01:00:57.304578 kubelet[2664]: E0906 01:00:57.304515 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:57.393478 env[1665]: time="2025-09-06T01:00:57.393387183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 6 01:00:59.305145 kubelet[2664]: E0906 01:00:59.305015 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:00:59.499159 kubelet[2664]: I0906 01:00:59.499067 2664 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:00:59.531000 audit[3540]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:59.559178 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 6 01:00:59.559279 kernel: audit: type=1325 audit(1757120459.531:276): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:59.531000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc74927800 a2=0 a3=7ffc749277ec items=0 ppid=2820 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:59.716975 kernel: audit: type=1300 audit(1757120459.531:276): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc74927800 a2=0 a3=7ffc749277ec items=0 ppid=2820 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:59.717011 kernel: audit: type=1327 audit(1757120459.531:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:59.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:59.775553 kernel: audit: type=1325 audit(1757120459.717:277): table=nat:100 family=2 entries=19 
op=nft_register_chain pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:59.717000 audit[3540]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:00:59.834788 kernel: audit: type=1300 audit(1757120459.717:277): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc74927800 a2=0 a3=7ffc749277ec items=0 ppid=2820 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:59.717000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc74927800 a2=0 a3=7ffc749277ec items=0 ppid=2820 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:00:59.932974 kernel: audit: type=1327 audit(1757120459.717:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:00:59.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:01.305488 kubelet[2664]: E0906 01:01:01.305353 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:01:03.304805 kubelet[2664]: E0906 01:01:03.304735 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:01:05.305550 kubelet[2664]: E0906 01:01:05.305400 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:01:06.110832 env[1665]: time="2025-09-06T01:01:06.110716042Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:06.112615 env[1665]: time="2025-09-06T01:01:06.112480886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:06.116068 env[1665]: time="2025-09-06T01:01:06.115973516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:06.119597 env[1665]: time="2025-09-06T01:01:06.119497761Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:06.121266 env[1665]: time="2025-09-06T01:01:06.121150356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 6 01:01:06.126039 env[1665]: time="2025-09-06T01:01:06.125898781Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 6 01:01:06.135168 env[1665]: time="2025-09-06T01:01:06.135124302Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae\"" Sep 6 01:01:06.135387 env[1665]: time="2025-09-06T01:01:06.135371314Z" level=info msg="StartContainer for \"1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae\"" Sep 6 01:01:06.158726 env[1665]: time="2025-09-06T01:01:06.158700512Z" level=info msg="StartContainer for \"1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae\" returns successfully" Sep 6 01:01:07.012084 env[1665]: time="2025-09-06T01:01:07.011921008Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:01:07.054978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae-rootfs.mount: Deactivated successfully. 
Sep 6 01:01:07.120065 kubelet[2664]: I0906 01:01:07.120009 2664 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 01:01:07.239121 kubelet[2664]: I0906 01:01:07.239045 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/785dbad3-bee5-4a35-b5c9-4f3a631bbb6f-config\") pod \"goldmane-7988f88666-p9tg6\" (UID: \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\") " pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.239505 kubelet[2664]: I0906 01:01:07.239152 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-ca-bundle\") pod \"whisker-6fb65f8bb9-d7c4l\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " pod="calico-system/whisker-6fb65f8bb9-d7c4l" Sep 6 01:01:07.239505 kubelet[2664]: I0906 01:01:07.239250 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff-calico-apiserver-certs\") pod \"calico-apiserver-6dff7d77c6-jpzhj\" (UID: \"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff\") " pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" Sep 6 01:01:07.239505 kubelet[2664]: I0906 01:01:07.239333 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b315d0a3-5dae-40db-b478-5f6cd0b453cc-tigera-ca-bundle\") pod \"calico-kube-controllers-54d97c87d7-rc6mk\" (UID: \"b315d0a3-5dae-40db-b478-5f6cd0b453cc\") " pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" Sep 6 01:01:07.239505 kubelet[2664]: I0906 01:01:07.239438 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" 
(UniqueName: \"kubernetes.io/secret/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-backend-key-pair\") pod \"whisker-6fb65f8bb9-d7c4l\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " pod="calico-system/whisker-6fb65f8bb9-d7c4l" Sep 6 01:01:07.240254 kubelet[2664]: I0906 01:01:07.239534 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/785dbad3-bee5-4a35-b5c9-4f3a631bbb6f-goldmane-key-pair\") pod \"goldmane-7988f88666-p9tg6\" (UID: \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\") " pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.240254 kubelet[2664]: I0906 01:01:07.239640 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ptns\" (UniqueName: \"kubernetes.io/projected/e248ec61-097e-4508-8c68-e2d9a1c01f4b-kube-api-access-8ptns\") pod \"calico-apiserver-6dff7d77c6-lds5m\" (UID: \"e248ec61-097e-4508-8c68-e2d9a1c01f4b\") " pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" Sep 6 01:01:07.240254 kubelet[2664]: I0906 01:01:07.239765 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/785dbad3-bee5-4a35-b5c9-4f3a631bbb6f-goldmane-ca-bundle\") pod \"goldmane-7988f88666-p9tg6\" (UID: \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\") " pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.240254 kubelet[2664]: I0906 01:01:07.239872 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85744052-5f5c-49af-a21e-68c0336acf1a-config-volume\") pod \"coredns-7c65d6cfc9-q529h\" (UID: \"85744052-5f5c-49af-a21e-68c0336acf1a\") " pod="kube-system/coredns-7c65d6cfc9-q529h" Sep 6 01:01:07.240254 kubelet[2664]: I0906 01:01:07.239955 2664 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwf5m\" (UniqueName: \"kubernetes.io/projected/85744052-5f5c-49af-a21e-68c0336acf1a-kube-api-access-nwf5m\") pod \"coredns-7c65d6cfc9-q529h\" (UID: \"85744052-5f5c-49af-a21e-68c0336acf1a\") " pod="kube-system/coredns-7c65d6cfc9-q529h" Sep 6 01:01:07.240935 kubelet[2664]: I0906 01:01:07.240051 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmbd\" (UniqueName: \"kubernetes.io/projected/ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff-kube-api-access-sgmbd\") pod \"calico-apiserver-6dff7d77c6-jpzhj\" (UID: \"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff\") " pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" Sep 6 01:01:07.240935 kubelet[2664]: I0906 01:01:07.240139 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ss5\" (UniqueName: \"kubernetes.io/projected/b315d0a3-5dae-40db-b478-5f6cd0b453cc-kube-api-access-44ss5\") pod \"calico-kube-controllers-54d97c87d7-rc6mk\" (UID: \"b315d0a3-5dae-40db-b478-5f6cd0b453cc\") " pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" Sep 6 01:01:07.240935 kubelet[2664]: I0906 01:01:07.240230 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-544cp\" (UniqueName: \"kubernetes.io/projected/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-kube-api-access-544cp\") pod \"whisker-6fb65f8bb9-d7c4l\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " pod="calico-system/whisker-6fb65f8bb9-d7c4l" Sep 6 01:01:07.240935 kubelet[2664]: I0906 01:01:07.240313 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkz5\" (UniqueName: \"kubernetes.io/projected/785dbad3-bee5-4a35-b5c9-4f3a631bbb6f-kube-api-access-pdkz5\") pod \"goldmane-7988f88666-p9tg6\" (UID: \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\") " 
pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.240935 kubelet[2664]: I0906 01:01:07.240398 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba5493ae-5339-4a9d-82a7-ea7e297cbb1f-config-volume\") pod \"coredns-7c65d6cfc9-8b5k6\" (UID: \"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f\") " pod="kube-system/coredns-7c65d6cfc9-8b5k6" Sep 6 01:01:07.241416 kubelet[2664]: I0906 01:01:07.240556 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e248ec61-097e-4508-8c68-e2d9a1c01f4b-calico-apiserver-certs\") pod \"calico-apiserver-6dff7d77c6-lds5m\" (UID: \"e248ec61-097e-4508-8c68-e2d9a1c01f4b\") " pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" Sep 6 01:01:07.241416 kubelet[2664]: I0906 01:01:07.240639 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbdf\" (UniqueName: \"kubernetes.io/projected/ba5493ae-5339-4a9d-82a7-ea7e297cbb1f-kube-api-access-mbbdf\") pod \"coredns-7c65d6cfc9-8b5k6\" (UID: \"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f\") " pod="kube-system/coredns-7c65d6cfc9-8b5k6" Sep 6 01:01:07.313127 env[1665]: time="2025-09-06T01:01:07.312918821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twvsq,Uid:993f1b5f-0640-48b4-ae84-9939b8fffa94,Namespace:calico-system,Attempt:0,}" Sep 6 01:01:07.403728 env[1665]: time="2025-09-06T01:01:07.403658913Z" level=info msg="shim disconnected" id=1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae Sep 6 01:01:07.403948 env[1665]: time="2025-09-06T01:01:07.403726846Z" level=warning msg="cleaning up after shim disconnected" id=1bde474f6f7e50b1b5a201f495417d01b94239316b4da460e636a5b92bd435ae namespace=k8s.io Sep 6 01:01:07.403948 env[1665]: time="2025-09-06T01:01:07.403747964Z" level=info msg="cleaning 
up dead shim" Sep 6 01:01:07.413426 env[1665]: time="2025-09-06T01:01:07.413379561Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:01:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3616 runtime=io.containerd.runc.v2\n" Sep 6 01:01:07.420290 env[1665]: time="2025-09-06T01:01:07.420262377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 6 01:01:07.445043 env[1665]: time="2025-09-06T01:01:07.444964273Z" level=error msg="Failed to destroy network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.445290 env[1665]: time="2025-09-06T01:01:07.445246113Z" level=error msg="encountered an error cleaning up failed sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.445340 env[1665]: time="2025-09-06T01:01:07.445285332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twvsq,Uid:993f1b5f-0640-48b4-ae84-9939b8fffa94,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.445497 kubelet[2664]: E0906 01:01:07.445468 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.445577 kubelet[2664]: E0906 01:01:07.445518 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twvsq" Sep 6 01:01:07.445577 kubelet[2664]: E0906 01:01:07.445538 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twvsq" Sep 6 01:01:07.445644 kubelet[2664]: E0906 01:01:07.445575 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-twvsq_calico-system(993f1b5f-0640-48b4-ae84-9939b8fffa94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-twvsq_calico-system(993f1b5f-0640-48b4-ae84-9939b8fffa94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twvsq" 
podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:01:07.466127 env[1665]: time="2025-09-06T01:01:07.466046754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d97c87d7-rc6mk,Uid:b315d0a3-5dae-40db-b478-5f6cd0b453cc,Namespace:calico-system,Attempt:0,}" Sep 6 01:01:07.469472 env[1665]: time="2025-09-06T01:01:07.469369302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b5k6,Uid:ba5493ae-5339-4a9d-82a7-ea7e297cbb1f,Namespace:kube-system,Attempt:0,}" Sep 6 01:01:07.472455 env[1665]: time="2025-09-06T01:01:07.472368335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q529h,Uid:85744052-5f5c-49af-a21e-68c0336acf1a,Namespace:kube-system,Attempt:0,}" Sep 6 01:01:07.472978 env[1665]: time="2025-09-06T01:01:07.472907321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-lds5m,Uid:e248ec61-097e-4508-8c68-e2d9a1c01f4b,Namespace:calico-apiserver,Attempt:0,}" Sep 6 01:01:07.476098 env[1665]: time="2025-09-06T01:01:07.476010840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb65f8bb9-d7c4l,Uid:d0a8074e-43ab-4946-8041-a4dd84c1e0ab,Namespace:calico-system,Attempt:0,}" Sep 6 01:01:07.478339 env[1665]: time="2025-09-06T01:01:07.478230980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-jpzhj,Uid:ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff,Namespace:calico-apiserver,Attempt:0,}" Sep 6 01:01:07.479005 env[1665]: time="2025-09-06T01:01:07.478707069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-p9tg6,Uid:785dbad3-bee5-4a35-b5c9-4f3a631bbb6f,Namespace:calico-system,Attempt:0,}" Sep 6 01:01:07.571830 env[1665]: time="2025-09-06T01:01:07.571192863Z" level=error msg="Failed to destroy network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.571830 env[1665]: time="2025-09-06T01:01:07.571533370Z" level=error msg="encountered an error cleaning up failed sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.571830 env[1665]: time="2025-09-06T01:01:07.571572967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d97c87d7-rc6mk,Uid:b315d0a3-5dae-40db-b478-5f6cd0b453cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.572051 kubelet[2664]: E0906 01:01:07.571747 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.572051 kubelet[2664]: E0906 01:01:07.571806 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" Sep 6 01:01:07.572051 kubelet[2664]: E0906 01:01:07.571826 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" Sep 6 01:01:07.572167 kubelet[2664]: E0906 01:01:07.571868 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d97c87d7-rc6mk_calico-system(b315d0a3-5dae-40db-b478-5f6cd0b453cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d97c87d7-rc6mk_calico-system(b315d0a3-5dae-40db-b478-5f6cd0b453cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" podUID="b315d0a3-5dae-40db-b478-5f6cd0b453cc" Sep 6 01:01:07.574930 env[1665]: time="2025-09-06T01:01:07.574884434Z" level=error msg="Failed to destroy network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.575186 env[1665]: time="2025-09-06T01:01:07.575166466Z" level=error msg="encountered an error cleaning up failed sandbox 
\"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.575227 env[1665]: time="2025-09-06T01:01:07.575202364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q529h,Uid:85744052-5f5c-49af-a21e-68c0336acf1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.575354 kubelet[2664]: E0906 01:01:07.575322 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.575403 kubelet[2664]: E0906 01:01:07.575375 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q529h" Sep 6 01:01:07.575403 kubelet[2664]: E0906 01:01:07.575395 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q529h" Sep 6 01:01:07.575476 kubelet[2664]: E0906 01:01:07.575434 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q529h_kube-system(85744052-5f5c-49af-a21e-68c0336acf1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q529h_kube-system(85744052-5f5c-49af-a21e-68c0336acf1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q529h" podUID="85744052-5f5c-49af-a21e-68c0336acf1a" Sep 6 01:01:07.576272 env[1665]: time="2025-09-06T01:01:07.576236311Z" level=error msg="Failed to destroy network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.576491 env[1665]: time="2025-09-06T01:01:07.576465253Z" level=error msg="encountered an error cleaning up failed sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.576553 env[1665]: time="2025-09-06T01:01:07.576504095Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b5k6,Uid:ba5493ae-5339-4a9d-82a7-ea7e297cbb1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.576671 kubelet[2664]: E0906 01:01:07.576641 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.576721 kubelet[2664]: E0906 01:01:07.576690 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8b5k6" Sep 6 01:01:07.576721 kubelet[2664]: E0906 01:01:07.576705 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8b5k6" Sep 6 01:01:07.576794 kubelet[2664]: E0906 01:01:07.576731 2664 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8b5k6_kube-system(ba5493ae-5339-4a9d-82a7-ea7e297cbb1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8b5k6_kube-system(ba5493ae-5339-4a9d-82a7-ea7e297cbb1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8b5k6" podUID="ba5493ae-5339-4a9d-82a7-ea7e297cbb1f" Sep 6 01:01:07.577048 env[1665]: time="2025-09-06T01:01:07.577026111Z" level=error msg="Failed to destroy network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577148 env[1665]: time="2025-09-06T01:01:07.577132591Z" level=error msg="Failed to destroy network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577239 env[1665]: time="2025-09-06T01:01:07.577220920Z" level=error msg="encountered an error cleaning up failed sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577268 env[1665]: time="2025-09-06T01:01:07.577251289Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-p9tg6,Uid:785dbad3-bee5-4a35-b5c9-4f3a631bbb6f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577319 env[1665]: time="2025-09-06T01:01:07.577304286Z" level=error msg="encountered an error cleaning up failed sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577343 env[1665]: time="2025-09-06T01:01:07.577327906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb65f8bb9-d7c4l,Uid:d0a8074e-43ab-4946-8041-a4dd84c1e0ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577378 kubelet[2664]: E0906 01:01:07.577345 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577378 kubelet[2664]: E0906 01:01:07.577372 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.577463 kubelet[2664]: E0906 01:01:07.577386 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.577463 kubelet[2664]: E0906 01:01:07.577391 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-p9tg6" Sep 6 01:01:07.577463 kubelet[2664]: E0906 01:01:07.577405 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fb65f8bb9-d7c4l" Sep 6 01:01:07.577463 kubelet[2664]: E0906 01:01:07.577415 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fb65f8bb9-d7c4l" Sep 6 01:01:07.577564 kubelet[2664]: E0906 01:01:07.577423 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-p9tg6_calico-system(785dbad3-bee5-4a35-b5c9-4f3a631bbb6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-p9tg6_calico-system(785dbad3-bee5-4a35-b5c9-4f3a631bbb6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-p9tg6" podUID="785dbad3-bee5-4a35-b5c9-4f3a631bbb6f" Sep 6 01:01:07.577564 kubelet[2664]: E0906 01:01:07.577448 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fb65f8bb9-d7c4l_calico-system(d0a8074e-43ab-4946-8041-a4dd84c1e0ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fb65f8bb9-d7c4l_calico-system(d0a8074e-43ab-4946-8041-a4dd84c1e0ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fb65f8bb9-d7c4l" podUID="d0a8074e-43ab-4946-8041-a4dd84c1e0ab" Sep 6 01:01:07.578838 env[1665]: time="2025-09-06T01:01:07.578803504Z" level=error msg="Failed to 
destroy network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.579058 env[1665]: time="2025-09-06T01:01:07.579038862Z" level=error msg="encountered an error cleaning up failed sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.579099 env[1665]: time="2025-09-06T01:01:07.579077946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-lds5m,Uid:e248ec61-097e-4508-8c68-e2d9a1c01f4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.579193 kubelet[2664]: E0906 01:01:07.579175 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.579244 kubelet[2664]: E0906 01:01:07.579201 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" Sep 6 01:01:07.579244 kubelet[2664]: E0906 01:01:07.579211 2664 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" Sep 6 01:01:07.579244 kubelet[2664]: E0906 01:01:07.579233 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff7d77c6-lds5m_calico-apiserver(e248ec61-097e-4508-8c68-e2d9a1c01f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff7d77c6-lds5m_calico-apiserver(e248ec61-097e-4508-8c68-e2d9a1c01f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" podUID="e248ec61-097e-4508-8c68-e2d9a1c01f4b" Sep 6 01:01:07.581136 env[1665]: time="2025-09-06T01:01:07.581117006Z" level=error msg="Failed to destroy network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.581278 env[1665]: 
time="2025-09-06T01:01:07.581264074Z" level=error msg="encountered an error cleaning up failed sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.581302 env[1665]: time="2025-09-06T01:01:07.581286393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-jpzhj,Uid:ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.581372 kubelet[2664]: E0906 01:01:07.581361 2664 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:07.581403 kubelet[2664]: E0906 01:01:07.581383 2664 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" Sep 6 01:01:07.581403 kubelet[2664]: E0906 01:01:07.581395 2664 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" Sep 6 01:01:07.581451 kubelet[2664]: E0906 01:01:07.581413 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dff7d77c6-jpzhj_calico-apiserver(ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dff7d77c6-jpzhj_calico-apiserver(ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" podUID="ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff" Sep 6 01:01:08.356655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce-shm.mount: Deactivated successfully. 
Sep 6 01:01:08.424565 kubelet[2664]: I0906 01:01:08.424493 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:08.425684 env[1665]: time="2025-09-06T01:01:08.425577108Z" level=info msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" Sep 6 01:01:08.426553 kubelet[2664]: I0906 01:01:08.426467 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:08.427726 env[1665]: time="2025-09-06T01:01:08.427645821Z" level=info msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" Sep 6 01:01:08.428529 kubelet[2664]: I0906 01:01:08.428476 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:08.429716 env[1665]: time="2025-09-06T01:01:08.429643077Z" level=info msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" Sep 6 01:01:08.430637 kubelet[2664]: I0906 01:01:08.430577 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:08.431987 env[1665]: time="2025-09-06T01:01:08.431916203Z" level=info msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" Sep 6 01:01:08.432685 kubelet[2664]: I0906 01:01:08.432623 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:08.434065 env[1665]: time="2025-09-06T01:01:08.433980390Z" level=info msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" Sep 6 01:01:08.434991 kubelet[2664]: I0906 
01:01:08.434917 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:08.436935 env[1665]: time="2025-09-06T01:01:08.436804448Z" level=info msg="StopPodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" Sep 6 01:01:08.438597 kubelet[2664]: I0906 01:01:08.438495 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:08.439556 env[1665]: time="2025-09-06T01:01:08.439527482Z" level=info msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" Sep 6 01:01:08.440778 kubelet[2664]: I0906 01:01:08.440746 2664 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:08.441433 env[1665]: time="2025-09-06T01:01:08.441390659Z" level=info msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" Sep 6 01:01:08.453341 env[1665]: time="2025-09-06T01:01:08.453291625Z" level=error msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" failed" error="failed to destroy network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.453521 kubelet[2664]: E0906 01:01:08.453491 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:08.453586 kubelet[2664]: E0906 01:01:08.453551 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833"} Sep 6 01:01:08.453620 kubelet[2664]: E0906 01:01:08.453609 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.453674 kubelet[2664]: E0906 01:01:08.453632 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8b5k6" podUID="ba5493ae-5339-4a9d-82a7-ea7e297cbb1f" Sep 6 01:01:08.454112 env[1665]: time="2025-09-06T01:01:08.454083207Z" level=error msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" failed" error="failed to destroy network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 6 01:01:08.454112 env[1665]: time="2025-09-06T01:01:08.454095205Z" level=error msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" failed" error="failed to destroy network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.454214 kubelet[2664]: E0906 01:01:08.454197 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:08.454250 kubelet[2664]: E0906 01:01:08.454224 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b"} Sep 6 01:01:08.454275 kubelet[2664]: E0906 01:01:08.454249 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.454330 kubelet[2664]: E0906 01:01:08.454199 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:08.454330 kubelet[2664]: E0906 01:01:08.454294 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79"} Sep 6 01:01:08.454330 kubelet[2664]: E0906 01:01:08.454321 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e248ec61-097e-4508-8c68-e2d9a1c01f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.454444 kubelet[2664]: E0906 01:01:08.454269 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fb65f8bb9-d7c4l" podUID="d0a8074e-43ab-4946-8041-a4dd84c1e0ab" Sep 6 01:01:08.454444 kubelet[2664]: E0906 01:01:08.454341 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e248ec61-097e-4508-8c68-e2d9a1c01f4b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" podUID="e248ec61-097e-4508-8c68-e2d9a1c01f4b" Sep 6 01:01:08.455732 env[1665]: time="2025-09-06T01:01:08.455693166Z" level=error msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" failed" error="failed to destroy network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.455864 kubelet[2664]: E0906 01:01:08.455838 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:08.455914 kubelet[2664]: E0906 01:01:08.455874 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff"} Sep 6 01:01:08.455914 kubelet[2664]: E0906 01:01:08.455902 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.456004 kubelet[2664]: E0906 01:01:08.455921 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-p9tg6" podUID="785dbad3-bee5-4a35-b5c9-4f3a631bbb6f" Sep 6 01:01:08.456234 env[1665]: time="2025-09-06T01:01:08.456212692Z" level=error msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" failed" error="failed to destroy network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.456304 kubelet[2664]: E0906 01:01:08.456288 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:08.456349 kubelet[2664]: E0906 01:01:08.456308 2664 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5"} Sep 6 01:01:08.456349 kubelet[2664]: E0906 01:01:08.456323 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85744052-5f5c-49af-a21e-68c0336acf1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.456349 kubelet[2664]: E0906 01:01:08.456334 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85744052-5f5c-49af-a21e-68c0336acf1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q529h" podUID="85744052-5f5c-49af-a21e-68c0336acf1a" Sep 6 01:01:08.456455 kubelet[2664]: E0906 01:01:08.456416 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:08.456455 kubelet[2664]: E0906 01:01:08.456437 2664 kuberuntime_manager.go:1479] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece"} Sep 6 01:01:08.456455 kubelet[2664]: E0906 01:01:08.456450 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.456531 env[1665]: time="2025-09-06T01:01:08.456346029Z" level=error msg="StopPodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" failed" error="failed to destroy network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.456554 kubelet[2664]: E0906 01:01:08.456460 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" podUID="ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff" Sep 6 01:01:08.456832 env[1665]: time="2025-09-06T01:01:08.456807219Z" level=error msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" failed" 
error="failed to destroy network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.456888 kubelet[2664]: E0906 01:01:08.456876 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:08.456922 kubelet[2664]: E0906 01:01:08.456891 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce"} Sep 6 01:01:08.456922 kubelet[2664]: E0906 01:01:08.456906 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"993f1b5f-0640-48b4-ae84-9939b8fffa94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.456922 kubelet[2664]: E0906 01:01:08.456917 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"993f1b5f-0640-48b4-ae84-9939b8fffa94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twvsq" podUID="993f1b5f-0640-48b4-ae84-9939b8fffa94" Sep 6 01:01:08.460191 env[1665]: time="2025-09-06T01:01:08.460145157Z" level=error msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" failed" error="failed to destroy network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 6 01:01:08.460234 kubelet[2664]: E0906 01:01:08.460214 2664 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:08.460261 kubelet[2664]: E0906 01:01:08.460235 2664 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a"} Sep 6 01:01:08.460261 kubelet[2664]: E0906 01:01:08.460250 2664 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b315d0a3-5dae-40db-b478-5f6cd0b453cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Sep 6 01:01:08.460316 kubelet[2664]: E0906 01:01:08.460261 2664 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b315d0a3-5dae-40db-b478-5f6cd0b453cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" podUID="b315d0a3-5dae-40db-b478-5f6cd0b453cc" Sep 6 01:01:14.189185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131705395.mount: Deactivated successfully. Sep 6 01:01:14.205121 env[1665]: time="2025-09-06T01:01:14.205070166Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:14.205538 env[1665]: time="2025-09-06T01:01:14.205502162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:14.206083 env[1665]: time="2025-09-06T01:01:14.206049673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:14.206648 env[1665]: time="2025-09-06T01:01:14.206634965Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:14.206969 env[1665]: time="2025-09-06T01:01:14.206956550Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 6 01:01:14.211243 env[1665]: time="2025-09-06T01:01:14.211223642Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 6 01:01:14.216273 env[1665]: time="2025-09-06T01:01:14.216255300Z" level=info msg="CreateContainer within sandbox \"323581c8b648002f87b1d3bda4898ad0ed82ad7b86dcdd2d2577c161b0e83ff2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2ab21bb3b46ddbe2791609c42c7b11d622b3510175d07e8312bd1443727e81b1\"" Sep 6 01:01:14.216523 env[1665]: time="2025-09-06T01:01:14.216482162Z" level=info msg="StartContainer for \"2ab21bb3b46ddbe2791609c42c7b11d622b3510175d07e8312bd1443727e81b1\"" Sep 6 01:01:14.240211 env[1665]: time="2025-09-06T01:01:14.240158920Z" level=info msg="StartContainer for \"2ab21bb3b46ddbe2791609c42c7b11d622b3510175d07e8312bd1443727e81b1\" returns successfully" Sep 6 01:01:14.354968 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 6 01:01:14.355032 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 6 01:01:14.420279 env[1665]: time="2025-09-06T01:01:14.420185459Z" level=info msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" Sep 6 01:01:14.463607 kubelet[2664]: I0906 01:01:14.463511 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rv8rk" podStartSLOduration=1.31154141 podStartE2EDuration="35.46349553s" podCreationTimestamp="2025-09-06 01:00:39 +0000 UTC" firstStartedPulling="2025-09-06 01:00:40.05561441 +0000 UTC m=+16.853351113" lastFinishedPulling="2025-09-06 01:01:14.207568551 +0000 UTC m=+51.005305233" observedRunningTime="2025-09-06 01:01:14.46300097 +0000 UTC m=+51.260737657" watchObservedRunningTime="2025-09-06 01:01:14.46349553 +0000 UTC m=+51.261232214" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" iface="eth0" netns="/var/run/netns/cni-b8d86082-e4e4-65ba-c5c8-61d26821cb60" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" iface="eth0" netns="/var/run/netns/cni-b8d86082-e4e4-65ba-c5c8-61d26821cb60" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" iface="eth0" netns="/var/run/netns/cni-b8d86082-e4e4-65ba-c5c8-61d26821cb60" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.462 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.473 [INFO][4244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.473 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.473 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.477 [WARNING][4244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.477 [INFO][4244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.478 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:14.482477 env[1665]: 2025-09-06 01:01:14.480 [INFO][4210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:14.482780 env[1665]: time="2025-09-06T01:01:14.482549889Z" level=info msg="TearDown network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" successfully" Sep 6 01:01:14.482780 env[1665]: time="2025-09-06T01:01:14.482567730Z" level=info msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" returns successfully" Sep 6 01:01:14.588764 kubelet[2664]: I0906 01:01:14.588699 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-ca-bundle\") pod \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " Sep 6 01:01:14.589012 kubelet[2664]: I0906 01:01:14.588796 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-544cp\" (UniqueName: \"kubernetes.io/projected/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-kube-api-access-544cp\") pod 
\"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " Sep 6 01:01:14.589012 kubelet[2664]: I0906 01:01:14.588887 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-backend-key-pair\") pod \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\" (UID: \"d0a8074e-43ab-4946-8041-a4dd84c1e0ab\") " Sep 6 01:01:14.589489 kubelet[2664]: I0906 01:01:14.589380 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d0a8074e-43ab-4946-8041-a4dd84c1e0ab" (UID: "d0a8074e-43ab-4946-8041-a4dd84c1e0ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:01:14.594433 kubelet[2664]: I0906 01:01:14.594318 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d0a8074e-43ab-4946-8041-a4dd84c1e0ab" (UID: "d0a8074e-43ab-4946-8041-a4dd84c1e0ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:01:14.594433 kubelet[2664]: I0906 01:01:14.594391 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-kube-api-access-544cp" (OuterVolumeSpecName: "kube-api-access-544cp") pod "d0a8074e-43ab-4946-8041-a4dd84c1e0ab" (UID: "d0a8074e-43ab-4946-8041-a4dd84c1e0ab"). InnerVolumeSpecName "kube-api-access-544cp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:01:14.689862 kubelet[2664]: I0906 01:01:14.689781 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-544cp\" (UniqueName: \"kubernetes.io/projected/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-kube-api-access-544cp\") on node \"ci-3510.3.8-n-4cc2a8c2f2\" DevicePath \"\"" Sep 6 01:01:14.689862 kubelet[2664]: I0906 01:01:14.689833 2664 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-4cc2a8c2f2\" DevicePath \"\"" Sep 6 01:01:14.689862 kubelet[2664]: I0906 01:01:14.689860 2664 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0a8074e-43ab-4946-8041-a4dd84c1e0ab-whisker-ca-bundle\") on node \"ci-3510.3.8-n-4cc2a8c2f2\" DevicePath \"\"" Sep 6 01:01:15.194014 systemd[1]: run-netns-cni\x2db8d86082\x2de4e4\x2d65ba\x2dc5c8\x2d61d26821cb60.mount: Deactivated successfully. Sep 6 01:01:15.194114 systemd[1]: var-lib-kubelet-pods-d0a8074e\x2d43ab\x2d4946\x2d8041\x2da4dd84c1e0ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d544cp.mount: Deactivated successfully. Sep 6 01:01:15.194199 systemd[1]: var-lib-kubelet-pods-d0a8074e\x2d43ab\x2d4946\x2d8041\x2da4dd84c1e0ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 6 01:01:15.595925 kubelet[2664]: I0906 01:01:15.595315 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64w8\" (UniqueName: \"kubernetes.io/projected/38ea1cc5-6000-47a5-9833-a6ea43f225b9-kube-api-access-f64w8\") pod \"whisker-5579c8f56-68kf2\" (UID: \"38ea1cc5-6000-47a5-9833-a6ea43f225b9\") " pod="calico-system/whisker-5579c8f56-68kf2" Sep 6 01:01:15.595925 kubelet[2664]: I0906 01:01:15.595503 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38ea1cc5-6000-47a5-9833-a6ea43f225b9-whisker-backend-key-pair\") pod \"whisker-5579c8f56-68kf2\" (UID: \"38ea1cc5-6000-47a5-9833-a6ea43f225b9\") " pod="calico-system/whisker-5579c8f56-68kf2" Sep 6 01:01:15.595925 kubelet[2664]: I0906 01:01:15.595596 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ea1cc5-6000-47a5-9833-a6ea43f225b9-whisker-ca-bundle\") pod \"whisker-5579c8f56-68kf2\" (UID: \"38ea1cc5-6000-47a5-9833-a6ea43f225b9\") " pod="calico-system/whisker-5579c8f56-68kf2" Sep 6 01:01:15.614000 audit[4368]: AVC avc: denied { write } for pid=4368 comm="tee" name="fd" dev="proc" ino=40225 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.614000 audit[4369]: AVC avc: denied { write } for pid=4369 comm="tee" name="fd" dev="proc" ino=42012 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.743359 kernel: audit: type=1400 audit(1757120475.614:278): avc: denied { write } for pid=4368 comm="tee" name="fd" dev="proc" ino=40225 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.743426 kernel: audit: type=1400 audit(1757120475.614:279): avc: 
denied { write } for pid=4369 comm="tee" name="fd" dev="proc" ino=42012 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.743444 kernel: audit: type=1400 audit(1757120475.614:280): avc: denied { write } for pid=4370 comm="tee" name="fd" dev="proc" ino=9177 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.614000 audit[4370]: AVC avc: denied { write } for pid=4370 comm="tee" name="fd" dev="proc" ino=9177 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.796259 env[1665]: time="2025-09-06T01:01:15.796225862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5579c8f56-68kf2,Uid:38ea1cc5-6000-47a5-9833-a6ea43f225b9,Namespace:calico-system,Attempt:0,}" Sep 6 01:01:15.614000 audit[4368]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa55e77cc a2=241 a3=1b6 items=1 ppid=4333 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.904220 kernel: audit: type=1300 audit(1757120475.614:278): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa55e77cc a2=241 a3=1b6 items=1 ppid=4333 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.904309 kernel: audit: type=1300 audit(1757120475.614:279): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc3e3b7cb a2=241 a3=1b6 items=1 ppid=4334 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.614000 audit[4369]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc3e3b7cb a2=241 a3=1b6 items=1 ppid=4334 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.614000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 6 01:01:16.030427 kernel: audit: type=1307 audit(1757120475.614:278): cwd="/etc/service/enabled/bird/log" Sep 6 01:01:16.030495 kernel: audit: type=1307 audit(1757120475.614:279): cwd="/etc/service/enabled/felix/log" Sep 6 01:01:15.614000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 6 01:01:15.614000 audit[4370]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca75667cb a2=241 a3=1b6 items=1 ppid=4335 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.154116 kernel: audit: type=1300 audit(1757120475.614:280): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca75667cb a2=241 a3=1b6 items=1 ppid=4335 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.154172 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Sep 6 01:01:16.154215 kernel: audit: type=1302 audit(1757120475.614:278): item=0 name="/dev/fd/63" inode=40222 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: PATH item=0 name="/dev/fd/63" inode=40222 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: PATH item=0 name="/dev/fd/63" inode=42009 dev=00:0c 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 6 01:01:15.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.614000 audit: PATH item=0 name="/dev/fd/63" inode=9174 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.614000 audit[4374]: AVC avc: denied { write } for pid=4374 comm="tee" name="fd" dev="proc" ino=17382 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.614000 audit[4374]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc805d07cb a2=241 a3=1b6 items=1 ppid=4339 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.614000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 6 01:01:15.614000 audit: PATH item=0 name="/dev/fd/63" inode=17379 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.614000 audit[4377]: AVC avc: denied { write } for pid=4377 comm="tee" name="fd" dev="proc" ino=39270 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.614000 audit[4377]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf8d137bc a2=241 a3=1b6 items=1 ppid=4336 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.614000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 6 01:01:15.614000 audit: PATH item=0 name="/dev/fd/63" inode=39267 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.615000 audit[4376]: AVC avc: denied { write } for pid=4376 comm="tee" name="fd" dev="proc" ino=33255 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.615000 audit[4376]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2932c7bb a2=241 a3=1b6 items=1 ppid=4345 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.615000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 6 01:01:15.615000 audit: PATH item=0 name="/dev/fd/63" inode=33252 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.678000 audit[4379]: AVC avc: denied { write } for pid=4379 comm="tee" name="fd" dev="proc" ino=39274 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 6 01:01:15.678000 audit[4379]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe90a2d7cd a2=241 a3=1b6 items=1 ppid=4340 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.678000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 6 01:01:15.678000 audit: PATH item=0 name="/dev/fd/63" inode=35318 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:01:15.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { perfmon } for 
pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.822000 audit: BPF prog-id=10 op=LOAD Sep 6 01:01:15.822000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbb06ff40 a2=98 a3=1fffffffffffffff items=0 ppid=4342 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.822000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:01:15.999000 audit: BPF prog-id=10 op=UNLOAD Sep 6 01:01:15.999000 
audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:15.999000 audit: BPF prog-id=11 op=LOAD Sep 6 01:01:15.999000 audit[4506]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbb06fe20 a2=94 a3=3 items=0 ppid=4342 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:15.999000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:01:16.059000 audit: BPF prog-id=11 op=UNLOAD Sep 6 01:01:16.059000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.059000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.059000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.059000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.059000 audit[4506]: AVC avc: denied { bpf } for pid=4506 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.059000 audit: BPF prog-id=12 op=LOAD Sep 6 01:01:16.059000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbb06fe60 a2=94 a3=7ffdbb070040 items=0 ppid=4342 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.059000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:01:16.243000 audit: BPF prog-id=12 op=UNLOAD Sep 6 01:01:16.243000 audit[4506]: AVC avc: denied { perfmon } for pid=4506 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.243000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffdbb06ff30 a2=50 a3=a000000085 items=0 ppid=4342 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.243000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { 
perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit: BPF prog-id=13 op=LOAD Sep 6 01:01:16.244000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc18c455d0 a2=98 a3=3 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.244000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.244000 audit: BPF prog-id=13 op=UNLOAD Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 
audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit: BPF prog-id=14 op=LOAD Sep 6 01:01:16.244000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc18c453c0 a2=94 a3=54428f items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.244000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.244000 audit: BPF prog-id=14 op=UNLOAD Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit[4523]: AVC avc: denied { bpf 
} for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.244000 audit: BPF prog-id=15 op=LOAD Sep 6 01:01:16.244000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc18c453f0 a2=94 a3=2 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.244000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.244000 audit: BPF prog-id=15 op=UNLOAD Sep 6 01:01:16.245068 systemd-networkd[1387]: cali3efab6d8328: Link UP Sep 6 01:01:16.300096 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:01:16.300146 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3efab6d8328: link becomes ready Sep 6 01:01:16.300351 systemd-networkd[1387]: cali3efab6d8328: Gained carrier Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.813 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.819 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0 whisker-5579c8f56- calico-system 38ea1cc5-6000-47a5-9833-a6ea43f225b9 911 0 2025-09-06 01:01:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5579c8f56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 whisker-5579c8f56-68kf2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3efab6d8328 [] [] }} ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" 
WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.819 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.831 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" HandleID="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.831 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" HandleID="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"whisker-5579c8f56-68kf2", "timestamp":"2025-09-06 01:01:15.831882823 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.831 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.831 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.832 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.836 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:15.904 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.000 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.002 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.003 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.003 [INFO][4503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.004 [INFO][4503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.006 [INFO][4503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.009 [INFO][4503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.1/26] block=192.168.41.0/26 
handle="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.009 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.1/26] handle="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.009 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:16.305658 env[1665]: 2025-09-06 01:01:16.009 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.1/26] IPv6=[] ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" HandleID="k8s-pod-network.95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.010 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0", GenerateName:"whisker-5579c8f56-", Namespace:"calico-system", SelfLink:"", UID:"38ea1cc5-6000-47a5-9833-a6ea43f225b9", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5579c8f56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"whisker-5579c8f56-68kf2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3efab6d8328", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.010 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.1/32] ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.010 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3efab6d8328 ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.300 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.300 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" 
Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0", GenerateName:"whisker-5579c8f56-", Namespace:"calico-system", SelfLink:"", UID:"38ea1cc5-6000-47a5-9833-a6ea43f225b9", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5579c8f56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e", Pod:"whisker-5579c8f56-68kf2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3efab6d8328", MAC:"b6:03:e9:99:bb:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:16.306086 env[1665]: 2025-09-06 01:01:16.304 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e" Namespace="calico-system" Pod="whisker-5579c8f56-68kf2" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--5579c8f56--68kf2-eth0" Sep 6 01:01:16.310119 env[1665]: 
time="2025-09-06T01:01:16.310086354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:16.310119 env[1665]: time="2025-09-06T01:01:16.310109576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:16.310202 env[1665]: time="2025-09-06T01:01:16.310120552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:16.310202 env[1665]: time="2025-09-06T01:01:16.310188045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e pid=4540 runtime=io.containerd.runc.v2 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for 
pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit: BPF prog-id=16 op=LOAD Sep 6 01:01:16.331000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc18c452b0 a2=94 a3=1 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.331000 audit: BPF prog-id=16 op=UNLOAD Sep 6 01:01:16.331000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.331000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc18c45380 a2=50 a3=7ffc18c45460 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338632 env[1665]: 
time="2025-09-06T01:01:16.338605957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5579c8f56-68kf2,Uid:38ea1cc5-6000-47a5-9833-a6ea43f225b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e\"" Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c452c0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc18c452f0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc18c45200 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c45310 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c452f0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c452e0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c45310 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc18c452f0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc18c45310 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc18c452e0 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc18c45350 a2=28 a3=0 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc18c45100 a2=50 a3=1 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 
audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit: BPF prog-id=17 op=LOAD Sep 6 01:01:16.338000 audit[4523]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc18c45100 a2=94 a3=5 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit: BPF prog-id=17 op=UNLOAD Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc18c451b0 a2=50 a3=1 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc18c452d0 a2=4 a3=38 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { confidentiality } for pid=4523 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc18c45320 a2=94 a3=6 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { confidentiality } for pid=4523 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc18c44ad0 a2=94 a3=88 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { perfmon } for pid=4523 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { bpf } for pid=4523 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.338000 audit[4523]: AVC avc: denied { confidentiality } for pid=4523 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.338000 audit[4523]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc18c44ad0 a2=94 a3=88 items=0 ppid=4342 pid=4523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 6 01:01:16.340758 env[1665]: time="2025-09-06T01:01:16.339346193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.342000 audit: BPF prog-id=18 op=LOAD Sep 6 01:01:16.342000 audit[4575]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8657450 a2=98 a3=1999999999999999 items=0 ppid=4342 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.342000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:01:16.343000 audit: BPF prog-id=18 op=UNLOAD Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit: BPF prog-id=19 op=LOAD Sep 6 01:01:16.343000 audit[4575]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8657330 a2=94 a3=ffff items=0 ppid=4342 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.343000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:01:16.343000 audit: BPF prog-id=19 op=UNLOAD Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { perfmon } for pid=4575 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit[4575]: AVC avc: denied { bpf } for pid=4575 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.343000 audit: BPF prog-id=20 op=LOAD Sep 6 01:01:16.343000 audit[4575]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8657370 a2=94 a3=7ffca8657550 items=0 ppid=4342 pid=4575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 6 01:01:16.343000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 6 01:01:16.343000 audit: BPF prog-id=20 op=UNLOAD Sep 6 01:01:16.370647 systemd-networkd[1387]: vxlan.calico: Link UP Sep 6 01:01:16.370651 systemd-networkd[1387]: vxlan.calico: Gained carrier Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit: BPF prog-id=21 op=LOAD Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b9c4050 a2=98 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit: BPF prog-id=21 op=UNLOAD Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 
audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit: BPF prog-id=22 op=LOAD Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b9c3e60 a2=94 a3=54428f items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit: BPF prog-id=22 op=UNLOAD Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: 
denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit: BPF prog-id=23 op=LOAD Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b9c3e90 a2=94 a3=2 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit: BPF prog-id=23 op=UNLOAD Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3d60 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc6b9c3d90 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC 
avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc6b9c3ca0 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3db0 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3d90 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3d80 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3db0 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 
audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc6b9c3d90 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc6b9c3db0 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc6b9c3d80 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc6b9c3df0 a2=28 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit: BPF prog-id=24 op=LOAD Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6b9c3c60 a2=94 a3=0 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.377000 audit: BPF prog-id=24 op=UNLOAD Sep 6 01:01:16.377000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.377000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc6b9c3c50 a2=50 a3=2800 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc6b9c3c50 a2=50 a3=2800 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.378000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit: BPF prog-id=25 op=LOAD Sep 6 01:01:16.378000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6b9c3470 a2=94 a3=2 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
01:01:16.378000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.378000 audit: BPF prog-id=25 op=UNLOAD Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { perfmon } for pid=4601 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit[4601]: AVC avc: denied { bpf } for pid=4601 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.378000 audit: BPF prog-id=26 op=LOAD Sep 6 01:01:16.378000 audit[4601]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6b9c3570 a2=94 a3=30 items=0 ppid=4342 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.378000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { 
perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit: BPF prog-id=27 op=LOAD Sep 6 01:01:16.379000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed6315960 a2=98 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.379000 audit: BPF prog-id=27 op=UNLOAD Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit: BPF prog-id=28 op=LOAD Sep 6 01:01:16.379000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed6315750 a2=94 a3=54428f items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 
01:01:16.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.379000 audit: BPF prog-id=28 op=UNLOAD Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
01:01:16.379000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.379000 audit: BPF prog-id=29 op=LOAD Sep 6 01:01:16.379000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed6315780 a2=94 a3=2 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.379000 audit: BPF prog-id=29 op=UNLOAD Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } 
for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit: BPF prog-id=30 op=LOAD Sep 6 01:01:16.466000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed6315640 a2=94 a3=1 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.466000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.466000 audit: BPF prog-id=30 op=UNLOAD Sep 6 01:01:16.466000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.466000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffed6315710 a2=50 a3=7ffed63157f0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 6 01:01:16.466000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed6315650 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed6315680 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed6315590 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed63156a0 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed6315680 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed6315670 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed63156a0 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffed6315680 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed63156a0 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffed6315670 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffed63156e0 a2=28 a3=0 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffed6315490 a2=50 a3=1 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } 
for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit: BPF prog-id=31 op=LOAD Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffed6315490 a2=94 a3=5 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit: BPF prog-id=31 op=UNLOAD Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffed6315540 a2=50 a3=1 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffed6315660 a2=4 a3=38 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { confidentiality } for pid=4605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffed63156b0 a2=94 a3=6 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { 
perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { confidentiality } for pid=4605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffed6314e60 a2=94 a3=88 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { perfmon } for pid=4605 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.473000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 
01:01:16.473000 audit[4605]: AVC avc: denied { confidentiality } for pid=4605 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 6 01:01:16.473000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffed6314e60 a2=94 a3=88 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.474000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.474000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffed6316890 a2=10 a3=f8f00800 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.474000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.474000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffed6316730 a2=10 a3=3 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.474000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.474000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffed63166d0 a2=10 a3=3 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.474000 audit[4605]: AVC avc: denied { bpf } for pid=4605 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 6 01:01:16.474000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffed63166d0 a2=10 a3=7 items=0 ppid=4342 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 6 01:01:16.487000 audit: BPF prog-id=26 op=UNLOAD Sep 6 01:01:16.521000 audit[4663]: NETFILTER_CFG 
table=mangle:101 family=2 entries=16 op=nft_register_chain pid=4663 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:16.521000 audit[4663]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe9d0df9e0 a2=0 a3=7ffe9d0df9cc items=0 ppid=4342 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.521000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:16.524000 audit[4661]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=4661 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:16.524000 audit[4661]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff4875f860 a2=0 a3=7fff4875f84c items=0 ppid=4342 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.524000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:16.528000 audit[4662]: NETFILTER_CFG table=raw:103 family=2 entries=21 op=nft_register_chain pid=4662 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:16.528000 audit[4662]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffceff1f4a0 a2=0 a3=7ffceff1f48c items=0 ppid=4342 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.528000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:16.531000 audit[4666]: NETFILTER_CFG table=filter:104 family=2 entries=94 op=nft_register_chain pid=4666 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:16.531000 audit[4666]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe42071df0 a2=0 a3=7ffe42071ddc items=0 ppid=4342 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:16.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:17.308859 kubelet[2664]: I0906 01:01:17.308792 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a8074e-43ab-4946-8041-a4dd84c1e0ab" path="/var/lib/kubelet/pods/d0a8074e-43ab-4946-8041-a4dd84c1e0ab/volumes" Sep 6 01:01:17.702698 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Sep 6 01:01:18.214687 systemd-networkd[1387]: cali3efab6d8328: Gained IPv6LL Sep 6 01:01:19.305878 env[1665]: time="2025-09-06T01:01:19.305779971Z" level=info msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" Sep 6 01:01:19.306867 env[1665]: time="2025-09-06T01:01:19.306023672Z" level=info msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" Sep 6 01:01:19.306867 env[1665]: time="2025-09-06T01:01:19.306512959Z" level=info msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.345 [INFO][4714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" 
Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.345 [INFO][4714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" iface="eth0" netns="/var/run/netns/cni-2739195c-24d7-d767-8f70-47273a95f2ec" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.346 [INFO][4714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" iface="eth0" netns="/var/run/netns/cni-2739195c-24d7-d767-8f70-47273a95f2ec" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.346 [INFO][4714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" iface="eth0" netns="/var/run/netns/cni-2739195c-24d7-d767-8f70-47273a95f2ec" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.346 [INFO][4714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.346 [INFO][4714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.356 [INFO][4759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.356 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.356 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.360 [WARNING][4759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.360 [INFO][4759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.361 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.362489 env[1665]: 2025-09-06 01:01:19.361 [INFO][4714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:19.362808 env[1665]: time="2025-09-06T01:01:19.362562448Z" level=info msg="TearDown network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" successfully" Sep 6 01:01:19.362808 env[1665]: time="2025-09-06T01:01:19.362583073Z" level=info msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" returns successfully" Sep 6 01:01:19.362965 env[1665]: time="2025-09-06T01:01:19.362951492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-p9tg6,Uid:785dbad3-bee5-4a35-b5c9-4f3a631bbb6f,Namespace:calico-system,Attempt:1,}" Sep 6 01:01:19.364453 systemd[1]: run-netns-cni\x2d2739195c\x2d24d7\x2dd767\x2d8f70\x2d47273a95f2ec.mount: Deactivated successfully. 
Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.344 [INFO][4713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.344 [INFO][4713] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" iface="eth0" netns="/var/run/netns/cni-d81b3df7-238f-bb77-baa6-ac5e886ceb1b" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.346 [INFO][4713] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" iface="eth0" netns="/var/run/netns/cni-d81b3df7-238f-bb77-baa6-ac5e886ceb1b" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.346 [INFO][4713] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" iface="eth0" netns="/var/run/netns/cni-d81b3df7-238f-bb77-baa6-ac5e886ceb1b" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.346 [INFO][4713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.346 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.356 [INFO][4757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.356 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.361 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.364 [WARNING][4757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.364 [INFO][4757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.365 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.366835 env[1665]: 2025-09-06 01:01:19.366 [INFO][4713] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:19.367125 env[1665]: time="2025-09-06T01:01:19.366892218Z" level=info msg="TearDown network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" successfully" Sep 6 01:01:19.367125 env[1665]: time="2025-09-06T01:01:19.366907834Z" level=info msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" returns successfully" Sep 6 01:01:19.367231 env[1665]: time="2025-09-06T01:01:19.367218340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twvsq,Uid:993f1b5f-0640-48b4-ae84-9939b8fffa94,Namespace:calico-system,Attempt:1,}" Sep 6 01:01:19.370788 systemd[1]: run-netns-cni\x2dd81b3df7\x2d238f\x2dbb77\x2dbaa6\x2dac5e886ceb1b.mount: Deactivated successfully. 
Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" iface="eth0" netns="/var/run/netns/cni-49f8d75f-98cc-2e47-6e9c-736ad58d75a5" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" iface="eth0" netns="/var/run/netns/cni-49f8d75f-98cc-2e47-6e9c-736ad58d75a5" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" iface="eth0" netns="/var/run/netns/cni-49f8d75f-98cc-2e47-6e9c-736ad58d75a5" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.346 [INFO][4712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.356 [INFO][4761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.356 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.365 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.368 [WARNING][4761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.368 [INFO][4761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.370 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.374316 env[1665]: 2025-09-06 01:01:19.370 [INFO][4712] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:19.374632 env[1665]: time="2025-09-06T01:01:19.374384153Z" level=info msg="TearDown network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" successfully" Sep 6 01:01:19.374632 env[1665]: time="2025-09-06T01:01:19.374403184Z" level=info msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" returns successfully" Sep 6 01:01:19.374841 env[1665]: time="2025-09-06T01:01:19.374823173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d97c87d7-rc6mk,Uid:b315d0a3-5dae-40db-b478-5f6cd0b453cc,Namespace:calico-system,Attempt:1,}" Sep 6 01:01:19.377770 systemd[1]: run-netns-cni\x2d49f8d75f\x2d98cc\x2d2e47\x2d6e9c\x2d736ad58d75a5.mount: Deactivated successfully. Sep 6 01:01:19.422004 systemd-networkd[1387]: cali230d44a1db3: Link UP Sep 6 01:01:19.475485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:01:19.475517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali230d44a1db3: link becomes ready Sep 6 01:01:19.475551 systemd-networkd[1387]: cali230d44a1db3: Gained carrier Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.384 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0 goldmane-7988f88666- calico-system 785dbad3-bee5-4a35-b5c9-4f3a631bbb6f 930 0 2025-09-06 01:00:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 goldmane-7988f88666-p9tg6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali230d44a1db3 [] [] }} ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" 
Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.384 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.399 [INFO][4861] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" HandleID="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.399 [INFO][4861] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" HandleID="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"goldmane-7988f88666-p9tg6", "timestamp":"2025-09-06 01:01:19.399247207 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.399 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.399 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.399 [INFO][4861] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.403 [INFO][4861] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.408 [INFO][4861] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.411 [INFO][4861] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.412 [INFO][4861] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.413 [INFO][4861] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.413 [INFO][4861] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.415 [INFO][4861] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.417 [INFO][4861] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.420 [INFO][4861] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.2/26] block=192.168.41.0/26 
handle="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.420 [INFO][4861] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.2/26] handle="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.420 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.482587 env[1665]: 2025-09-06 01:01:19.420 [INFO][4861] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.2/26] IPv6=[] ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" HandleID="k8s-pod-network.0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.421 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"goldmane-7988f88666-p9tg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali230d44a1db3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.421 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.2/32] ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.421 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali230d44a1db3 ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.475 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.475 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f", Pod:"goldmane-7988f88666-p9tg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali230d44a1db3", MAC:"1e:05:cb:59:16:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.483073 env[1665]: 2025-09-06 01:01:19.481 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f" Namespace="calico-system" Pod="goldmane-7988f88666-p9tg6" 
WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:19.486785 env[1665]: time="2025-09-06T01:01:19.486750490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:19.486785 env[1665]: time="2025-09-06T01:01:19.486771933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:19.486785 env[1665]: time="2025-09-06T01:01:19.486778937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:19.486894 env[1665]: time="2025-09-06T01:01:19.486844649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f pid=4926 runtime=io.containerd.runc.v2 Sep 6 01:01:19.488000 audit[4936]: NETFILTER_CFG table=filter:105 family=2 entries=44 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:19.488000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7fff9b5c84f0 a2=0 a3=7fff9b5c84dc items=0 ppid=4342 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:19.488000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:19.514468 env[1665]: time="2025-09-06T01:01:19.514440895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-p9tg6,Uid:785dbad3-bee5-4a35-b5c9-4f3a631bbb6f,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f\"" Sep 6 01:01:19.524561 systemd-networkd[1387]: calibd87c8c934e: Link UP Sep 6 01:01:19.551426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibd87c8c934e: link becomes ready Sep 6 01:01:19.551457 systemd-networkd[1387]: calibd87c8c934e: Gained carrier Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.388 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0 csi-node-driver- calico-system 993f1b5f-0640-48b4-ae84-9939b8fffa94 929 0 2025-09-06 01:00:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 csi-node-driver-twvsq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibd87c8c934e [] [] }} ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.388 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.401 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" HandleID="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" 
Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.401 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" HandleID="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000526b20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"csi-node-driver-twvsq", "timestamp":"2025-09-06 01:01:19.401061576 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.401 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.420 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.420 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.505 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.509 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.512 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.514 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.515 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.515 [INFO][4870] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.516 [INFO][4870] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6 Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.519 [INFO][4870] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.522 [INFO][4870] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.3/26] block=192.168.41.0/26 
handle="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.522 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.3/26] handle="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.522 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.559347 env[1665]: 2025-09-06 01:01:19.522 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.3/26] IPv6=[] ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" HandleID="k8s-pod-network.7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.523 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"993f1b5f-0640-48b4-ae84-9939b8fffa94", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"csi-node-driver-twvsq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd87c8c934e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.523 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.3/32] ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.523 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd87c8c934e ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.551 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.551 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"993f1b5f-0640-48b4-ae84-9939b8fffa94", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6", Pod:"csi-node-driver-twvsq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd87c8c934e", MAC:"f2:10:e1:ce:4d:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.559809 env[1665]: 2025-09-06 01:01:19.558 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6" Namespace="calico-system" Pod="csi-node-driver-twvsq" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:19.563557 env[1665]: time="2025-09-06T01:01:19.563522235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:19.563557 env[1665]: time="2025-09-06T01:01:19.563545021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:19.563654 env[1665]: time="2025-09-06T01:01:19.563555752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:19.563654 env[1665]: time="2025-09-06T01:01:19.563632528Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6 pid=4978 runtime=io.containerd.runc.v2 Sep 6 01:01:19.564000 audit[4989]: NETFILTER_CFG table=filter:106 family=2 entries=40 op=nft_register_chain pid=4989 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:19.564000 audit[4989]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffdb7332460 a2=0 a3=7ffdb733244c items=0 ppid=4342 pid=4989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:19.564000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:19.578833 env[1665]: time="2025-09-06T01:01:19.578809049Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-twvsq,Uid:993f1b5f-0640-48b4-ae84-9939b8fffa94,Namespace:calico-system,Attempt:1,} returns sandbox id \"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6\"" Sep 6 01:01:19.687577 systemd-networkd[1387]: calie5716c67125: Link UP Sep 6 01:01:19.720026 systemd-networkd[1387]: calie5716c67125: Gained carrier Sep 6 01:01:19.720477 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie5716c67125: link becomes ready Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.395 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0 calico-kube-controllers-54d97c87d7- calico-system b315d0a3-5dae-40db-b478-5f6cd0b453cc 931 0 2025-09-06 01:00:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d97c87d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 calico-kube-controllers-54d97c87d7-rc6mk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie5716c67125 [] [] }} ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.395 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.408 [INFO][4891] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" HandleID="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.408 [INFO][4891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" HandleID="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"calico-kube-controllers-54d97c87d7-rc6mk", "timestamp":"2025-09-06 01:01:19.408384939 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.408 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.522 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.522 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.607 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.638 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.647 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.651 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.656 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.656 [INFO][4891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.659 [INFO][4891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.667 [INFO][4891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.678 [INFO][4891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.4/26] block=192.168.41.0/26 
handle="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.679 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.4/26] handle="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.679 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:19.728787 env[1665]: 2025-09-06 01:01:19.679 [INFO][4891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.4/26] IPv6=[] ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" HandleID="k8s-pod-network.1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.683 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0", GenerateName:"calico-kube-controllers-54d97c87d7-", Namespace:"calico-system", SelfLink:"", UID:"b315d0a3-5dae-40db-b478-5f6cd0b453cc", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d97c87d7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"calico-kube-controllers-54d97c87d7-rc6mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5716c67125", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.683 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.4/32] ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.683 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5716c67125 ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.720 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" 
WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.720 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0", GenerateName:"calico-kube-controllers-54d97c87d7-", Namespace:"calico-system", SelfLink:"", UID:"b315d0a3-5dae-40db-b478-5f6cd0b453cc", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d97c87d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f", Pod:"calico-kube-controllers-54d97c87d7-rc6mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5716c67125", 
MAC:"ca:a9:f1:1e:39:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:19.729374 env[1665]: 2025-09-06 01:01:19.727 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f" Namespace="calico-system" Pod="calico-kube-controllers-54d97c87d7-rc6mk" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:19.734785 env[1665]: time="2025-09-06T01:01:19.734740438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:19.734785 env[1665]: time="2025-09-06T01:01:19.734767908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:19.734785 env[1665]: time="2025-09-06T01:01:19.734777678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:19.734945 env[1665]: time="2025-09-06T01:01:19.734860747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f pid=5031 runtime=io.containerd.runc.v2 Sep 6 01:01:19.736000 audit[5040]: NETFILTER_CFG table=filter:107 family=2 entries=44 op=nft_register_chain pid=5040 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:19.736000 audit[5040]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7fff198389e0 a2=0 a3=7fff198389cc items=0 ppid=4342 pid=5040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:19.736000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:19.772502 env[1665]: time="2025-09-06T01:01:19.772449881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d97c87d7-rc6mk,Uid:b315d0a3-5dae-40db-b478-5f6cd0b453cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f\"" Sep 6 01:01:20.518613 systemd-networkd[1387]: cali230d44a1db3: Gained IPv6LL Sep 6 01:01:21.306166 env[1665]: time="2025-09-06T01:01:21.306029523Z" level=info msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" iface="eth0" netns="/var/run/netns/cni-dd982914-dcbf-74b0-622f-fd22a4ad4f5a" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" iface="eth0" netns="/var/run/netns/cni-dd982914-dcbf-74b0-622f-fd22a4ad4f5a" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" iface="eth0" netns="/var/run/netns/cni-dd982914-dcbf-74b0-622f-fd22a4ad4f5a" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.340 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.350 [INFO][5095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.350 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.350 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.354 [WARNING][5095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.354 [INFO][5095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.355 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:21.357648 env[1665]: 2025-09-06 01:01:21.356 [INFO][5077] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:21.358052 env[1665]: time="2025-09-06T01:01:21.357777718Z" level=info msg="TearDown network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" successfully" Sep 6 01:01:21.358052 env[1665]: time="2025-09-06T01:01:21.357797254Z" level=info msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" returns successfully" Sep 6 01:01:21.358229 env[1665]: time="2025-09-06T01:01:21.358216916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-lds5m,Uid:e248ec61-097e-4508-8c68-e2d9a1c01f4b,Namespace:calico-apiserver,Attempt:1,}" Sep 6 01:01:21.359800 systemd[1]: run-netns-cni\x2ddd982914\x2ddcbf\x2d74b0\x2d622f\x2dfd22a4ad4f5a.mount: Deactivated successfully. 
Sep 6 01:01:21.450322 systemd-networkd[1387]: cali33a4badfbac: Link UP Sep 6 01:01:21.475431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:01:21.475496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali33a4badfbac: link becomes ready Sep 6 01:01:21.501457 systemd-networkd[1387]: cali33a4badfbac: Gained carrier Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.399 [INFO][5113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0 calico-apiserver-6dff7d77c6- calico-apiserver e248ec61-097e-4508-8c68-e2d9a1c01f4b 949 0 2025-09-06 01:00:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dff7d77c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 calico-apiserver-6dff7d77c6-lds5m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33a4badfbac [] [] }} ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.399 [INFO][5113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.419 [INFO][5132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" 
HandleID="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.419 [INFO][5132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" HandleID="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00020b6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"calico-apiserver-6dff7d77c6-lds5m", "timestamp":"2025-09-06 01:01:21.419117933 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.419 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.419 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.419 [INFO][5132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.425 [INFO][5132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.429 [INFO][5132] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.433 [INFO][5132] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.434 [INFO][5132] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.436 [INFO][5132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.436 [INFO][5132] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.438 [INFO][5132] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.441 [INFO][5132] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.446 [INFO][5132] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.5/26] block=192.168.41.0/26 
handle="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.446 [INFO][5132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.5/26] handle="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.446 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:21.508109 env[1665]: 2025-09-06 01:01:21.446 [INFO][5132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.5/26] IPv6=[] ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" HandleID="k8s-pod-network.3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.508549 env[1665]: 2025-09-06 01:01:21.448 [INFO][5113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e248ec61-097e-4508-8c68-e2d9a1c01f4b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"calico-apiserver-6dff7d77c6-lds5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a4badfbac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:21.508549 env[1665]: 2025-09-06 01:01:21.448 [INFO][5113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.5/32] ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.508549 env[1665]: 2025-09-06 01:01:21.448 [INFO][5113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33a4badfbac ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.508549 env[1665]: 2025-09-06 01:01:21.501 [INFO][5113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 
01:01:21.508549 env[1665]: 2025-09-06 01:01:21.501 [INFO][5113] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e248ec61-097e-4508-8c68-e2d9a1c01f4b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b", Pod:"calico-apiserver-6dff7d77c6-lds5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a4badfbac", MAC:"a6:7f:20:6a:e1:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 
6 01:01:21.508549 env[1665]: 2025-09-06 01:01:21.507 [INFO][5113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-lds5m" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:21.512806 env[1665]: time="2025-09-06T01:01:21.512773621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:21.512806 env[1665]: time="2025-09-06T01:01:21.512794950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:21.512806 env[1665]: time="2025-09-06T01:01:21.512801766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:21.512916 env[1665]: time="2025-09-06T01:01:21.512875134Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b pid=5160 runtime=io.containerd.runc.v2 Sep 6 01:01:21.514000 audit[5171]: NETFILTER_CFG table=filter:108 family=2 entries=68 op=nft_register_chain pid=5171 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:21.538445 kernel: kauditd_printk_skb: 564 callbacks suppressed Sep 6 01:01:21.538482 kernel: audit: type=1325 audit(1757120481.514:390): table=filter:108 family=2 entries=68 op=nft_register_chain pid=5171 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:21.542585 systemd-networkd[1387]: calie5716c67125: Gained IPv6LL Sep 6 01:01:21.514000 audit[5171]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffcb6386510 a2=0 a3=7ffcb63864fc items=0 ppid=4342 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:21.607518 systemd-networkd[1387]: calibd87c8c934e: Gained IPv6LL Sep 6 01:01:21.678976 kernel: audit: type=1300 audit(1757120481.514:390): arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffcb6386510 a2=0 a3=7ffcb63864fc items=0 ppid=4342 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:21.679022 kernel: audit: type=1327 audit(1757120481.514:390): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:21.514000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:21.751676 env[1665]: time="2025-09-06T01:01:21.751651948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-lds5m,Uid:e248ec61-097e-4508-8c68-e2d9a1c01f4b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b\"" Sep 6 01:01:22.305888 env[1665]: time="2025-09-06T01:01:22.305787493Z" level=info msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.416 [INFO][5214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.416 [INFO][5214] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" iface="eth0" netns="/var/run/netns/cni-f11f4a83-01c4-fa79-69d3-c57284dbff6e" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.416 [INFO][5214] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" iface="eth0" netns="/var/run/netns/cni-f11f4a83-01c4-fa79-69d3-c57284dbff6e" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.417 [INFO][5214] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" iface="eth0" netns="/var/run/netns/cni-f11f4a83-01c4-fa79-69d3-c57284dbff6e" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.417 [INFO][5214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.417 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.431 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.431 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.431 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.435 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.435 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.436 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:22.438220 env[1665]: 2025-09-06 01:01:22.437 [INFO][5214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:22.438671 env[1665]: time="2025-09-06T01:01:22.438276591Z" level=info msg="TearDown network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" successfully" Sep 6 01:01:22.438671 env[1665]: time="2025-09-06T01:01:22.438299292Z" level=info msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" returns successfully" Sep 6 01:01:22.438756 env[1665]: time="2025-09-06T01:01:22.438704998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b5k6,Uid:ba5493ae-5339-4a9d-82a7-ea7e297cbb1f,Namespace:kube-system,Attempt:1,}" Sep 6 01:01:22.440147 systemd[1]: run-netns-cni\x2df11f4a83\x2d01c4\x2dfa79\x2d69d3\x2dc57284dbff6e.mount: Deactivated successfully. 
Sep 6 01:01:22.522892 systemd-networkd[1387]: calic965b7f4ca3: Link UP Sep 6 01:01:22.580495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:01:22.580538 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic965b7f4ca3: link becomes ready Sep 6 01:01:22.580550 systemd-networkd[1387]: calic965b7f4ca3: Gained carrier Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.459 [INFO][5245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0 coredns-7c65d6cfc9- kube-system ba5493ae-5339-4a9d-82a7-ea7e297cbb1f 958 0 2025-09-06 01:00:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 coredns-7c65d6cfc9-8b5k6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic965b7f4ca3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.460 [INFO][5245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.473 [INFO][5267] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" HandleID="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" 
Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.473 [INFO][5267] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" HandleID="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"coredns-7c65d6cfc9-8b5k6", "timestamp":"2025-09-06 01:01:22.473482238 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.473 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.473 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.473 [INFO][5267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.477 [INFO][5267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.481 [INFO][5267] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.484 [INFO][5267] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.487 [INFO][5267] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.492 [INFO][5267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.492 [INFO][5267] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.495 [INFO][5267] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51 Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.502 [INFO][5267] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.515 [INFO][5267] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.6/26] block=192.168.41.0/26 
handle="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.515 [INFO][5267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.6/26] handle="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.515 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:22.587778 env[1665]: 2025-09-06 01:01:22.515 [INFO][5267] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.6/26] IPv6=[] ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" HandleID="k8s-pod-network.39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.519 [INFO][5245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"coredns-7c65d6cfc9-8b5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic965b7f4ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.519 [INFO][5245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.6/32] ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.519 [INFO][5245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic965b7f4ca3 ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.580 [INFO][5245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.580 [INFO][5245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51", Pod:"coredns-7c65d6cfc9-8b5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic965b7f4ca3", MAC:"6a:53:d9:67:d3:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:22.588246 env[1665]: 2025-09-06 01:01:22.586 [INFO][5245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8b5k6" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:22.593614 env[1665]: time="2025-09-06T01:01:22.593579073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:22.593614 env[1665]: time="2025-09-06T01:01:22.593602888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:22.593614 env[1665]: time="2025-09-06T01:01:22.593613425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:22.593742 env[1665]: time="2025-09-06T01:01:22.593693309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51 pid=5304 runtime=io.containerd.runc.v2 Sep 6 01:01:22.594000 audit[5311]: NETFILTER_CFG table=filter:109 family=2 entries=54 op=nft_register_chain pid=5311 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:22.594000 audit[5311]: SYSCALL arch=c000003e syscall=46 success=yes exit=26100 a0=3 a1=7ffdd13946f0 a2=0 a3=7ffdd13946dc items=0 ppid=4342 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:22.733740 kernel: audit: type=1325 audit(1757120482.594:391): table=filter:109 family=2 entries=54 op=nft_register_chain pid=5311 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:22.733825 kernel: audit: type=1300 audit(1757120482.594:391): arch=c000003e syscall=46 success=yes exit=26100 a0=3 a1=7ffdd13946f0 a2=0 a3=7ffdd13946dc items=0 ppid=4342 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:22.594000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:22.788400 kernel: audit: type=1327 audit(1757120482.594:391): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:22.805998 env[1665]: time="2025-09-06T01:01:22.805972234Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8b5k6,Uid:ba5493ae-5339-4a9d-82a7-ea7e297cbb1f,Namespace:kube-system,Attempt:1,} returns sandbox id \"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51\"" Sep 6 01:01:22.807066 env[1665]: time="2025-09-06T01:01:22.807049063Z" level=info msg="CreateContainer within sandbox \"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:01:22.840599 env[1665]: time="2025-09-06T01:01:22.840391377Z" level=info msg="CreateContainer within sandbox \"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aca47985c87b0c5116a903158ca4bce4a2567eb2d9f93693d0b78f764d58f55a\"" Sep 6 01:01:22.841294 env[1665]: time="2025-09-06T01:01:22.841187603Z" level=info msg="StartContainer for \"aca47985c87b0c5116a903158ca4bce4a2567eb2d9f93693d0b78f764d58f55a\"" Sep 6 01:01:22.867035 env[1665]: time="2025-09-06T01:01:22.866984544Z" level=info msg="StartContainer for \"aca47985c87b0c5116a903158ca4bce4a2567eb2d9f93693d0b78f764d58f55a\" returns successfully" Sep 6 01:01:22.936350 env[1665]: time="2025-09-06T01:01:22.936294687Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:22.936915 env[1665]: time="2025-09-06T01:01:22.936902975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:22.937564 env[1665]: time="2025-09-06T01:01:22.937519092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:22.938212 env[1665]: 
time="2025-09-06T01:01:22.938170493Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:22.938558 env[1665]: time="2025-09-06T01:01:22.938520200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 6 01:01:22.939115 env[1665]: time="2025-09-06T01:01:22.939091392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 6 01:01:22.939587 env[1665]: time="2025-09-06T01:01:22.939573001Z" level=info msg="CreateContainer within sandbox \"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 6 01:01:22.943250 env[1665]: time="2025-09-06T01:01:22.943205120Z" level=info msg="CreateContainer within sandbox \"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6d08320028ce3cfde350dddf5a47ddca5f0a635fb609ceeec5c40aebe5106506\"" Sep 6 01:01:22.943437 env[1665]: time="2025-09-06T01:01:22.943423044Z" level=info msg="StartContainer for \"6d08320028ce3cfde350dddf5a47ddca5f0a635fb609ceeec5c40aebe5106506\"" Sep 6 01:01:22.974633 env[1665]: time="2025-09-06T01:01:22.974608072Z" level=info msg="StartContainer for \"6d08320028ce3cfde350dddf5a47ddca5f0a635fb609ceeec5c40aebe5106506\" returns successfully" Sep 6 01:01:23.270716 systemd-networkd[1387]: cali33a4badfbac: Gained IPv6LL Sep 6 01:01:23.304878 env[1665]: time="2025-09-06T01:01:23.304855642Z" level=info msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" Sep 6 01:01:23.304969 env[1665]: time="2025-09-06T01:01:23.304906055Z" level=info msg="StopPodSandbox for 
\"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" Sep 6 01:01:23.304969 env[1665]: time="2025-09-06T01:01:23.304912005Z" level=info msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.322 [WARNING][5468] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0", GenerateName:"calico-kube-controllers-54d97c87d7-", Namespace:"calico-system", SelfLink:"", UID:"b315d0a3-5dae-40db-b478-5f6cd0b453cc", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d97c87d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f", Pod:"calico-kube-controllers-54d97c87d7-rc6mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5716c67125", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.322 [INFO][5468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.322 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" iface="eth0" netns="" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.322 [INFO][5468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.322 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.331 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.331 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.331 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.344 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.344 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.346 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.347369 env[1665]: 2025-09-06 01:01:23.346 [INFO][5468] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.347738 env[1665]: time="2025-09-06T01:01:23.347385274Z" level=info msg="TearDown network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" successfully" Sep 6 01:01:23.347738 env[1665]: time="2025-09-06T01:01:23.347405466Z" level=info msg="StopPodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" returns successfully" Sep 6 01:01:23.347738 env[1665]: time="2025-09-06T01:01:23.347702216Z" level=info msg="RemovePodSandbox for \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" Sep 6 01:01:23.347792 env[1665]: time="2025-09-06T01:01:23.347720748Z" level=info msg="Forcibly stopping sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\"" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.344 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.344 
[INFO][5469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" iface="eth0" netns="/var/run/netns/cni-41f0dfcb-4888-0945-9602-baa5a90b3528" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.345 [INFO][5469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" iface="eth0" netns="/var/run/netns/cni-41f0dfcb-4888-0945-9602-baa5a90b3528" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.345 [INFO][5469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" iface="eth0" netns="/var/run/netns/cni-41f0dfcb-4888-0945-9602-baa5a90b3528" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.345 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.345 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.354 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.354 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.354 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.358 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.358 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.359 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.361312 env[1665]: 2025-09-06 01:01:23.360 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:01:23.361672 env[1665]: time="2025-09-06T01:01:23.361396380Z" level=info msg="TearDown network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" successfully" Sep 6 01:01:23.361672 env[1665]: time="2025-09-06T01:01:23.361429467Z" level=info msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" returns successfully" Sep 6 01:01:23.361806 env[1665]: time="2025-09-06T01:01:23.361792678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q529h,Uid:85744052-5f5c-49af-a21e-68c0336acf1a,Namespace:kube-system,Attempt:1,}" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" iface="eth0" netns="/var/run/netns/cni-004736f5-f25e-710b-0af1-2b3debc49267" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" iface="eth0" netns="/var/run/netns/cni-004736f5-f25e-710b-0af1-2b3debc49267" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" iface="eth0" netns="/var/run/netns/cni-004736f5-f25e-710b-0af1-2b3debc49267" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.344 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.354 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.354 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.359 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.364 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.364 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.365 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.366823 env[1665]: 2025-09-06 01:01:23.366 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:01:23.367132 env[1665]: time="2025-09-06T01:01:23.366887096Z" level=info msg="TearDown network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" successfully" Sep 6 01:01:23.367132 env[1665]: time="2025-09-06T01:01:23.366905132Z" level=info msg="StopPodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" returns successfully" Sep 6 01:01:23.367291 env[1665]: time="2025-09-06T01:01:23.367277060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-jpzhj,Uid:ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff,Namespace:calico-apiserver,Attempt:1,}" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.364 [WARNING][5559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0", GenerateName:"calico-kube-controllers-54d97c87d7-", Namespace:"calico-system", SelfLink:"", UID:"b315d0a3-5dae-40db-b478-5f6cd0b453cc", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d97c87d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f", Pod:"calico-kube-controllers-54d97c87d7-rc6mk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5716c67125", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.365 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.365 [INFO][5559] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" iface="eth0" netns="" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.365 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.365 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.375 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.375 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.375 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.379 [WARNING][5597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.379 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" HandleID="k8s-pod-network.55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--kube--controllers--54d97c87d7--rc6mk-eth0" Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.380 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.381932 env[1665]: 2025-09-06 01:01:23.381 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a" Sep 6 01:01:23.382242 env[1665]: time="2025-09-06T01:01:23.381950127Z" level=info msg="TearDown network for sandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" successfully" Sep 6 01:01:23.383280 env[1665]: time="2025-09-06T01:01:23.383239010Z" level=info msg="RemovePodSandbox \"55301b5cdd7dd57f3adc99105d30ea214dde7f3a1f95a2b1dc758ce61d2a082a\" returns successfully" Sep 6 01:01:23.383585 env[1665]: time="2025-09-06T01:01:23.383546069Z" level=info msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" Sep 6 01:01:23.418467 systemd-networkd[1387]: cali778439a76dc: Link UP Sep 6 01:01:23.442093 systemd[1]: run-netns-cni\x2d004736f5\x2df25e\x2d710b\x2d0af1\x2d2b3debc49267.mount: Deactivated successfully. Sep 6 01:01:23.442160 systemd[1]: run-netns-cni\x2d41f0dfcb\x2d4888\x2d0945\x2d9602\x2dbaa5a90b3528.mount: Deactivated successfully. 
Sep 6 01:01:23.443867 systemd-networkd[1387]: cali778439a76dc: Gained carrier Sep 6 01:01:23.444424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali778439a76dc: link becomes ready Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.383 [INFO][5602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0 coredns-7c65d6cfc9- kube-system 85744052-5f5c-49af-a21e-68c0336acf1a 974 0 2025-09-06 01:00:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 coredns-7c65d6cfc9-q529h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali778439a76dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.383 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.396 [INFO][5670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" HandleID="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.396 [INFO][5670] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" HandleID="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"coredns-7c65d6cfc9-q529h", "timestamp":"2025-09-06 01:01:23.396231377 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.396 [INFO][5670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.396 [INFO][5670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.396 [INFO][5670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.400 [INFO][5670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.403 [INFO][5670] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.406 [INFO][5670] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.407 [INFO][5670] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.409 [INFO][5670] ipam/ipam.go 235: Affinity is confirmed 
and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.409 [INFO][5670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.410 [INFO][5670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.412 [INFO][5670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.416 [INFO][5670] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.7/26] block=192.168.41.0/26 handle="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.416 [INFO][5670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.7/26] handle="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.416 [INFO][5670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 6 01:01:23.450069 env[1665]: 2025-09-06 01:01:23.416 [INFO][5670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.7/26] IPv6=[] ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" HandleID="k8s-pod-network.830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.417 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"85744052-5f5c-49af-a21e-68c0336acf1a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"coredns-7c65d6cfc9-q529h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali778439a76dc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.417 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.7/32] ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.417 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali778439a76dc ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.443 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.443 [INFO][5602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"85744052-5f5c-49af-a21e-68c0336acf1a", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d", Pod:"coredns-7c65d6cfc9-q529h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali778439a76dc", MAC:"ea:f3:ad:6f:b7:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.450667 env[1665]: 2025-09-06 01:01:23.449 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q529h" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:01:23.456000 audit[5741]: NETFILTER_CFG table=filter:110 family=2 entries=48 op=nft_register_chain pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:23.460481 env[1665]: time="2025-09-06T01:01:23.460452605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:23.460481 env[1665]: time="2025-09-06T01:01:23.460472636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:23.460593 env[1665]: time="2025-09-06T01:01:23.460481208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:23.460593 env[1665]: time="2025-09-06T01:01:23.460567373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d pid=5749 runtime=io.containerd.runc.v2 Sep 6 01:01:23.488352 kubelet[2664]: I0906 01:01:23.488321 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8b5k6" podStartSLOduration=54.488311057 podStartE2EDuration="54.488311057s" podCreationTimestamp="2025-09-06 01:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:01:23.487864605 +0000 UTC m=+60.285601293" watchObservedRunningTime="2025-09-06 01:01:23.488311057 +0000 UTC m=+60.286047739" Sep 6 01:01:23.456000 audit[5741]: SYSCALL arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffde20d2590 a2=0 a3=7ffde20d257c items=0 
ppid=4342 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.596499 kernel: audit: type=1325 audit(1757120483.456:392): table=filter:110 family=2 entries=48 op=nft_register_chain pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:23.596569 kernel: audit: type=1300 audit(1757120483.456:392): arch=c000003e syscall=46 success=yes exit=22704 a0=3 a1=7ffde20d2590 a2=0 a3=7ffde20d257c items=0 ppid=4342 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.596589 kernel: audit: type=1327 audit(1757120483.456:392): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:23.456000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 01:01:23.657000 audit[5777]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:23.663884 systemd-networkd[1387]: cali2ba29c9d3d2: Link UP Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.400 [WARNING][5664] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e248ec61-097e-4508-8c68-e2d9a1c01f4b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b", Pod:"calico-apiserver-6dff7d77c6-lds5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a4badfbac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.401 [INFO][5664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.401 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" iface="eth0" netns="" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.401 [INFO][5664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.401 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.411 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.411 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.662 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.666 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.666 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.667 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.670633 env[1665]: 2025-09-06 01:01:23.670 [INFO][5664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.671092 env[1665]: time="2025-09-06T01:01:23.670648581Z" level=info msg="TearDown network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" successfully" Sep 6 01:01:23.671092 env[1665]: time="2025-09-06T01:01:23.670670677Z" level=info msg="StopPodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" returns successfully" Sep 6 01:01:23.671092 env[1665]: time="2025-09-06T01:01:23.670926352Z" level=info msg="RemovePodSandbox for \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" Sep 6 01:01:23.671092 env[1665]: time="2025-09-06T01:01:23.670948310Z" level=info msg="Forcibly stopping sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\"" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.687 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e248ec61-097e-4508-8c68-e2d9a1c01f4b", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b", Pod:"calico-apiserver-6dff7d77c6-lds5m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33a4badfbac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.687 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.687 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" iface="eth0" netns="" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.687 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.687 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.697 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.697 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.697 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.701 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.701 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" HandleID="k8s-pod-network.4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--lds5m-eth0" Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.702 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.704108 env[1665]: 2025-09-06 01:01:23.703 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79" Sep 6 01:01:23.704669 env[1665]: time="2025-09-06T01:01:23.704120041Z" level=info msg="TearDown network for sandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" successfully" Sep 6 01:01:23.657000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffdcfdb520 a2=0 a3=7fffdcfdb50c items=0 ppid=2820 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:23.711424 kernel: audit: type=1325 audit(1757120483.657:393): table=filter:111 family=2 entries=20 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:23.711450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:01:23.739688 
env[1665]: time="2025-09-06T01:01:23.739665997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q529h,Uid:85744052-5f5c-49af-a21e-68c0336acf1a,Namespace:kube-system,Attempt:1,} returns sandbox id \"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d\"" Sep 6 01:01:23.740778 env[1665]: time="2025-09-06T01:01:23.740762841Z" level=info msg="CreateContainer within sandbox \"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:01:23.759052 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2ba29c9d3d2: link becomes ready Sep 6 01:01:23.759368 systemd-networkd[1387]: cali2ba29c9d3d2: Gained carrier Sep 6 01:01:23.759743 env[1665]: time="2025-09-06T01:01:23.759695767Z" level=info msg="RemovePodSandbox \"4541eec933567df9d5b2079af3a060a3710d74f3fc1f5e394347a4719ff6ba79\" returns successfully" Sep 6 01:01:23.759950 env[1665]: time="2025-09-06T01:01:23.759891413Z" level=info msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" Sep 6 01:01:23.761000 audit[5777]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=5777 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:23.761000 audit[5777]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffdcfdb520 a2=0 a3=0 items=0 ppid=2820 pid=5777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:23.762943 env[1665]: time="2025-09-06T01:01:23.762919443Z" level=info msg="CreateContainer within sandbox \"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"febf1e34cc64ab2820566a2376e3e68d73680d38eb61fdf8f4264416b5e32a9b\"" Sep 6 01:01:23.763246 env[1665]: time="2025-09-06T01:01:23.763229513Z" level=info msg="StartContainer for \"febf1e34cc64ab2820566a2376e3e68d73680d38eb61fdf8f4264416b5e32a9b\"" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.386 [INFO][5620] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0 calico-apiserver-6dff7d77c6- calico-apiserver ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff 972 0 2025-09-06 01:00:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dff7d77c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-4cc2a8c2f2 calico-apiserver-6dff7d77c6-jpzhj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ba29c9d3d2 [] [] }} ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.386 [INFO][5620] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.399 [INFO][5679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" HandleID="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" 
Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.399 [INFO][5679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" HandleID="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-4cc2a8c2f2", "pod":"calico-apiserver-6dff7d77c6-jpzhj", "timestamp":"2025-09-06 01:01:23.39920186 +0000 UTC"}, Hostname:"ci-3510.3.8-n-4cc2a8c2f2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.399 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.416 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.416 [INFO][5679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-4cc2a8c2f2' Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.501 [INFO][5679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.504 [INFO][5679] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.598 [INFO][5679] ipam/ipam.go 511: Trying affinity for 192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.651 [INFO][5679] ipam/ipam.go 158: Attempting to load block cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.653 [INFO][5679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.653 [INFO][5679] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.654 [INFO][5679] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626 Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.657 [INFO][5679] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.662 [INFO][5679] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.41.8/26] block=192.168.41.0/26 
handle="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.662 [INFO][5679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.41.8/26] handle="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" host="ci-3510.3.8-n-4cc2a8c2f2" Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.662 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.767536 env[1665]: 2025-09-06 01:01:23.662 [INFO][5679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.41.8/26] IPv6=[] ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" HandleID="k8s-pod-network.77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.768037 env[1665]: 2025-09-06 01:01:23.663 [INFO][5620] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"", Pod:"calico-apiserver-6dff7d77c6-jpzhj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ba29c9d3d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.768037 env[1665]: 2025-09-06 01:01:23.663 [INFO][5620] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.41.8/32] ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.768037 env[1665]: 2025-09-06 01:01:23.663 [INFO][5620] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ba29c9d3d2 ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.768037 env[1665]: 2025-09-06 01:01:23.759 [INFO][5620] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 
01:01:23.768037 env[1665]: 2025-09-06 01:01:23.759 [INFO][5620] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626", Pod:"calico-apiserver-6dff7d77c6-jpzhj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ba29c9d3d2", MAC:"02:28:45:cf:b8:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 
6 01:01:23.768037 env[1665]: 2025-09-06 01:01:23.766 [INFO][5620] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626" Namespace="calico-apiserver" Pod="calico-apiserver-6dff7d77c6-jpzhj" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:01:23.772512 env[1665]: time="2025-09-06T01:01:23.772468803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:01:23.772512 env[1665]: time="2025-09-06T01:01:23.772493422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:01:23.772512 env[1665]: time="2025-09-06T01:01:23.772500531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:01:23.772682 env[1665]: time="2025-09-06T01:01:23.772583079Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626 pid=5872 runtime=io.containerd.runc.v2 Sep 6 01:01:23.775000 audit[5894]: NETFILTER_CFG table=filter:113 family=2 entries=63 op=nft_register_chain pid=5894 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 6 01:01:23.775000 audit[5894]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7fff8c7b6830 a2=0 a3=7fff8c7b681c items=0 ppid=4342 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.775000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 6 
01:01:23.787157 env[1665]: time="2025-09-06T01:01:23.787104334Z" level=info msg="StartContainer for \"febf1e34cc64ab2820566a2376e3e68d73680d38eb61fdf8f4264416b5e32a9b\" returns successfully" Sep 6 01:01:23.790000 audit[5898]: NETFILTER_CFG table=filter:114 family=2 entries=17 op=nft_register_rule pid=5898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:23.790000 audit[5898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd4c8a5f70 a2=0 a3=7ffd4c8a5f5c items=0 ppid=2820 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:23.800000 audit[5898]: NETFILTER_CFG table=nat:115 family=2 entries=35 op=nft_register_chain pid=5898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:23.800000 audit[5898]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd4c8a5f70 a2=0 a3=7ffd4c8a5f5c items=0 ppid=2820 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:23.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:23.803128 env[1665]: time="2025-09-06T01:01:23.803098385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dff7d77c6-jpzhj,Uid:ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626\"" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.780 [WARNING][5835] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51", Pod:"coredns-7c65d6cfc9-8b5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic965b7f4ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.780 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.780 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" iface="eth0" netns="" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.780 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.780 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.791 [INFO][5908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.791 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.791 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.809 [WARNING][5908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.809 [INFO][5908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.811 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.812878 env[1665]: 2025-09-06 01:01:23.812 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.812878 env[1665]: time="2025-09-06T01:01:23.812867064Z" level=info msg="TearDown network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" successfully" Sep 6 01:01:23.813193 env[1665]: time="2025-09-06T01:01:23.812882172Z" level=info msg="StopPodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" returns successfully" Sep 6 01:01:23.813193 env[1665]: time="2025-09-06T01:01:23.813091995Z" level=info msg="RemovePodSandbox for \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" Sep 6 01:01:23.813193 env[1665]: time="2025-09-06T01:01:23.813111836Z" level=info msg="Forcibly stopping sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\"" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.829 [WARNING][5967] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ba5493ae-5339-4a9d-82a7-ea7e297cbb1f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"39f5f86410a43345d7d8a6a25703561660ec6bdda63ca3d9d7475e9fd7a58b51", Pod:"coredns-7c65d6cfc9-8b5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic965b7f4ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.845797 env[1665]: 2025-09-06 
01:01:23.830 [INFO][5967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.830 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" iface="eth0" netns="" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.830 [INFO][5967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.830 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.839 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.839 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.839 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.843 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.843 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" HandleID="k8s-pod-network.5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--8b5k6-eth0" Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.844 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.845797 env[1665]: 2025-09-06 01:01:23.844 [INFO][5967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833" Sep 6 01:01:23.845797 env[1665]: time="2025-09-06T01:01:23.845783682Z" level=info msg="TearDown network for sandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" successfully" Sep 6 01:01:23.847206 env[1665]: time="2025-09-06T01:01:23.847142704Z" level=info msg="RemovePodSandbox \"5f20cdc371de2a5f50d3d6f87fec53e80b73d05ffbb6e1b300a51dad7c026833\" returns successfully" Sep 6 01:01:23.847472 env[1665]: time="2025-09-06T01:01:23.847410223Z" level=info msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" Sep 6 01:01:23.847478 systemd-networkd[1387]: calic965b7f4ca3: Gained IPv6LL Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.863 [WARNING][6006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"993f1b5f-0640-48b4-ae84-9939b8fffa94", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6", Pod:"csi-node-driver-twvsq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd87c8c934e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.863 [INFO][6006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.863 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" iface="eth0" netns="" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.863 [INFO][6006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.863 [INFO][6006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.873 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.873 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.873 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.877 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.877 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.878 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.880037 env[1665]: 2025-09-06 01:01:23.879 [INFO][6006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.880354 env[1665]: time="2025-09-06T01:01:23.880055842Z" level=info msg="TearDown network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" successfully" Sep 6 01:01:23.880354 env[1665]: time="2025-09-06T01:01:23.880074821Z" level=info msg="StopPodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" returns successfully" Sep 6 01:01:23.880354 env[1665]: time="2025-09-06T01:01:23.880322311Z" level=info msg="RemovePodSandbox for \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" Sep 6 01:01:23.880354 env[1665]: time="2025-09-06T01:01:23.880339659Z" level=info msg="Forcibly stopping sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\"" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.898 [WARNING][6048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"993f1b5f-0640-48b4-ae84-9939b8fffa94", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6", Pod:"csi-node-driver-twvsq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibd87c8c934e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.898 [INFO][6048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.898 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" iface="eth0" netns="" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.898 [INFO][6048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.898 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.911 [INFO][6063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.911 [INFO][6063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.911 [INFO][6063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.916 [WARNING][6063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.916 [INFO][6063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" HandleID="k8s-pod-network.f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-csi--node--driver--twvsq-eth0" Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.917 [INFO][6063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.919131 env[1665]: 2025-09-06 01:01:23.918 [INFO][6048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce" Sep 6 01:01:23.919554 env[1665]: time="2025-09-06T01:01:23.919148609Z" level=info msg="TearDown network for sandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" successfully" Sep 6 01:01:23.920725 env[1665]: time="2025-09-06T01:01:23.920684997Z" level=info msg="RemovePodSandbox \"f50de4c83330a0da78d65eecfa541f4eb23fbc9a8d3f208db08b3592e37d14ce\" returns successfully" Sep 6 01:01:23.921019 env[1665]: time="2025-09-06T01:01:23.920979438Z" level=info msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.944 [WARNING][6095] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f", Pod:"goldmane-7988f88666-p9tg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali230d44a1db3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.944 [INFO][6095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.944 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" iface="eth0" netns="" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.944 [INFO][6095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.944 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.963 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.964 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.964 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.970 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.970 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.972 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:23.975199 env[1665]: 2025-09-06 01:01:23.973 [INFO][6095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:23.975809 env[1665]: time="2025-09-06T01:01:23.975207090Z" level=info msg="TearDown network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" successfully" Sep 6 01:01:23.975809 env[1665]: time="2025-09-06T01:01:23.975237100Z" level=info msg="StopPodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" returns successfully" Sep 6 01:01:23.975809 env[1665]: time="2025-09-06T01:01:23.975632365Z" level=info msg="RemovePodSandbox for \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" Sep 6 01:01:23.975809 env[1665]: time="2025-09-06T01:01:23.975665039Z" level=info msg="Forcibly stopping sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\"" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.006 [WARNING][6136] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"785dbad3-bee5-4a35-b5c9-4f3a631bbb6f", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f", Pod:"goldmane-7988f88666-p9tg6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali230d44a1db3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.007 [INFO][6136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.007 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" iface="eth0" netns="" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.007 [INFO][6136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.007 [INFO][6136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.027 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.027 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.027 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.034 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.034 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" HandleID="k8s-pod-network.b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-goldmane--7988f88666--p9tg6-eth0" Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.036 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:24.040296 env[1665]: 2025-09-06 01:01:24.038 [INFO][6136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff" Sep 6 01:01:24.041314 env[1665]: time="2025-09-06T01:01:24.040333370Z" level=info msg="TearDown network for sandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" successfully" Sep 6 01:01:24.043970 env[1665]: time="2025-09-06T01:01:24.043890063Z" level=info msg="RemovePodSandbox \"b71e7e854fb62be7c719dda2789fa46344651a0b2e35efb8efec9d0ab65610ff\" returns successfully" Sep 6 01:01:24.044574 env[1665]: time="2025-09-06T01:01:24.044527348Z" level=info msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.097 [WARNING][6178] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.097 [INFO][6178] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.097 [INFO][6178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" iface="eth0" netns="" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.097 [INFO][6178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.097 [INFO][6178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.130 [INFO][6196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.130 [INFO][6196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.130 [INFO][6196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.138 [WARNING][6196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.138 [INFO][6196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.140 [INFO][6196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:24.144246 env[1665]: 2025-09-06 01:01:24.142 [INFO][6178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.144246 env[1665]: time="2025-09-06T01:01:24.144206179Z" level=info msg="TearDown network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" successfully" Sep 6 01:01:24.144246 env[1665]: time="2025-09-06T01:01:24.144243557Z" level=info msg="StopPodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" returns successfully" Sep 6 01:01:24.145148 env[1665]: time="2025-09-06T01:01:24.144764364Z" level=info msg="RemovePodSandbox for \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" Sep 6 01:01:24.145148 env[1665]: time="2025-09-06T01:01:24.144807256Z" level=info msg="Forcibly stopping sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\"" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.184 [WARNING][6224] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" 
WorkloadEndpoint="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.185 [INFO][6224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.185 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" iface="eth0" netns="" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.185 [INFO][6224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.185 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.209 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.209 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.209 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.218 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.218 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" HandleID="k8s-pod-network.8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-whisker--6fb65f8bb9--d7c4l-eth0" Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.219 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:01:24.223353 env[1665]: 2025-09-06 01:01:24.221 [INFO][6224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b" Sep 6 01:01:24.224057 env[1665]: time="2025-09-06T01:01:24.223356591Z" level=info msg="TearDown network for sandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" successfully" Sep 6 01:01:24.225980 env[1665]: time="2025-09-06T01:01:24.225946489Z" level=info msg="RemovePodSandbox \"8d1cd822956d6b571b8e2df9f3d88a21cb119ad972c6ed575f3a88d62e3e0d9b\" returns successfully" Sep 6 01:01:24.486654 systemd-networkd[1387]: cali778439a76dc: Gained IPv6LL Sep 6 01:01:24.509819 kubelet[2664]: I0906 01:01:24.509690 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q529h" podStartSLOduration=55.509640055 podStartE2EDuration="55.509640055s" podCreationTimestamp="2025-09-06 01:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:01:24.508595857 +0000 UTC m=+61.306332615" watchObservedRunningTime="2025-09-06 01:01:24.509640055 +0000 UTC m=+61.307376790" Sep 6 
01:01:24.525000 audit[6263]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=6263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:24.525000 audit[6263]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5222c100 a2=0 a3=7fff5222c0ec items=0 ppid=2820 pid=6263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:24.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:24.534000 audit[6263]: NETFILTER_CFG table=nat:117 family=2 entries=44 op=nft_register_rule pid=6263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:24.534000 audit[6263]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff5222c100 a2=0 a3=7fff5222c0ec items=0 ppid=2820 pid=6263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:24.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:24.934623 systemd-networkd[1387]: cali2ba29c9d3d2: Gained IPv6LL Sep 6 01:01:25.447513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269456948.mount: Deactivated successfully. 
Sep 6 01:01:25.550000 audit[6265]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:25.550000 audit[6265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdfe2cc170 a2=0 a3=7ffdfe2cc15c items=0 ppid=2820 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:25.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:25.560000 audit[6265]: NETFILTER_CFG table=nat:119 family=2 entries=56 op=nft_register_chain pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:25.560000 audit[6265]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdfe2cc170 a2=0 a3=7ffdfe2cc15c items=0 ppid=2820 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:25.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:25.896309 env[1665]: time="2025-09-06T01:01:25.896219588Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:25.896876 env[1665]: time="2025-09-06T01:01:25.896834690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:25.897683 env[1665]: time="2025-09-06T01:01:25.897632001Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:25.898722 env[1665]: time="2025-09-06T01:01:25.898686272Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:25.899001 env[1665]: time="2025-09-06T01:01:25.898958010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 6 01:01:25.900389 env[1665]: time="2025-09-06T01:01:25.900372078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 6 01:01:25.900994 env[1665]: time="2025-09-06T01:01:25.900921748Z" level=info msg="CreateContainer within sandbox \"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 6 01:01:25.904588 env[1665]: time="2025-09-06T01:01:25.904546686Z" level=info msg="CreateContainer within sandbox \"0f5b2f6b54d3d0b7f846576e95e2c515ad7ce950e1ad9274e4c7b263eda95b7f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"94a9e441a8a01f58871cc298bb12f26882c369ea2ef65645db9df9c5cec569d8\"" Sep 6 01:01:25.904881 env[1665]: time="2025-09-06T01:01:25.904847979Z" level=info msg="StartContainer for \"94a9e441a8a01f58871cc298bb12f26882c369ea2ef65645db9df9c5cec569d8\"" Sep 6 01:01:25.938762 env[1665]: time="2025-09-06T01:01:25.938704412Z" level=info msg="StartContainer for \"94a9e441a8a01f58871cc298bb12f26882c369ea2ef65645db9df9c5cec569d8\" returns successfully" Sep 6 01:01:26.520574 kubelet[2664]: I0906 01:01:26.520533 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-p9tg6" podStartSLOduration=41.13529565 
podStartE2EDuration="47.520520637s" podCreationTimestamp="2025-09-06 01:00:39 +0000 UTC" firstStartedPulling="2025-09-06 01:01:19.515013096 +0000 UTC m=+56.312749779" lastFinishedPulling="2025-09-06 01:01:25.90023808 +0000 UTC m=+62.697974766" observedRunningTime="2025-09-06 01:01:26.520271325 +0000 UTC m=+63.318008011" watchObservedRunningTime="2025-09-06 01:01:26.520520637 +0000 UTC m=+63.318257320" Sep 6 01:01:26.526000 audit[6317]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=6317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:26.550059 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 6 01:01:26.550146 kernel: audit: type=1325 audit(1757120486.526:402): table=filter:120 family=2 entries=14 op=nft_register_rule pid=6317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:26.526000 audit[6317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe71dcaa80 a2=0 a3=7ffe71dcaa6c items=0 ppid=2820 pid=6317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:26.687799 kernel: audit: type=1300 audit(1757120486.526:402): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe71dcaa80 a2=0 a3=7ffe71dcaa6c items=0 ppid=2820 pid=6317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:26.687837 kernel: audit: type=1327 audit(1757120486.526:402): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:26.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:26.741000 audit[6317]: NETFILTER_CFG 
table=nat:121 family=2 entries=20 op=nft_register_rule pid=6317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:26.741000 audit[6317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe71dcaa80 a2=0 a3=7ffe71dcaa6c items=0 ppid=2820 pid=6317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:26.882371 kernel: audit: type=1325 audit(1757120486.741:403): table=nat:121 family=2 entries=20 op=nft_register_rule pid=6317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:26.882400 kernel: audit: type=1300 audit(1757120486.741:403): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe71dcaa80 a2=0 a3=7ffe71dcaa6c items=0 ppid=2820 pid=6317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:26.882414 kernel: audit: type=1327 audit(1757120486.741:403): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:26.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:28.993692 env[1665]: time="2025-09-06T01:01:28.993637107Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:28.994292 env[1665]: time="2025-09-06T01:01:28.994251958Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:28.994974 env[1665]: 
time="2025-09-06T01:01:28.994961803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:28.996012 env[1665]: time="2025-09-06T01:01:28.996001563Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:28.996181 env[1665]: time="2025-09-06T01:01:28.996168031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 6 01:01:28.997587 env[1665]: time="2025-09-06T01:01:28.997574429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 6 01:01:28.997904 env[1665]: time="2025-09-06T01:01:28.997890705Z" level=info msg="CreateContainer within sandbox \"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 6 01:01:29.003444 env[1665]: time="2025-09-06T01:01:29.003380570Z" level=info msg="CreateContainer within sandbox \"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"188d0e7c8e49d02b51d0b285fa48ad5240b1d433ae0574f78a7365d9d7517d40\"" Sep 6 01:01:29.003736 env[1665]: time="2025-09-06T01:01:29.003662812Z" level=info msg="StartContainer for \"188d0e7c8e49d02b51d0b285fa48ad5240b1d433ae0574f78a7365d9d7517d40\"" Sep 6 01:01:29.028267 env[1665]: time="2025-09-06T01:01:29.028240007Z" level=info msg="StartContainer for \"188d0e7c8e49d02b51d0b285fa48ad5240b1d433ae0574f78a7365d9d7517d40\" returns successfully" Sep 6 01:01:34.395901 env[1665]: time="2025-09-06T01:01:34.395846839Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:34.396461 env[1665]: time="2025-09-06T01:01:34.396436498Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:34.397082 env[1665]: time="2025-09-06T01:01:34.397043271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:34.398017 env[1665]: time="2025-09-06T01:01:34.397975783Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:34.398190 env[1665]: time="2025-09-06T01:01:34.398143901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 6 01:01:34.398824 env[1665]: time="2025-09-06T01:01:34.398809199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 01:01:34.402713 env[1665]: time="2025-09-06T01:01:34.402683225Z" level=info msg="CreateContainer within sandbox \"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 6 01:01:34.407312 env[1665]: time="2025-09-06T01:01:34.407271096Z" level=info msg="CreateContainer within sandbox \"1c7ba1b479dd49d08e3ff556abb8110e006d4eb8dcab5cfe7f942f4ac81eac3f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3663b743e5ef4e6586fff894cee0031dcf73b835dd853ec62e13cb0cde0dbacb\"" Sep 
6 01:01:34.407539 env[1665]: time="2025-09-06T01:01:34.407524655Z" level=info msg="StartContainer for \"3663b743e5ef4e6586fff894cee0031dcf73b835dd853ec62e13cb0cde0dbacb\"" Sep 6 01:01:34.440453 env[1665]: time="2025-09-06T01:01:34.440405113Z" level=info msg="StartContainer for \"3663b743e5ef4e6586fff894cee0031dcf73b835dd853ec62e13cb0cde0dbacb\" returns successfully" Sep 6 01:01:34.556312 kubelet[2664]: I0906 01:01:34.556229 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54d97c87d7-rc6mk" podStartSLOduration=39.930666286 podStartE2EDuration="54.556202406s" podCreationTimestamp="2025-09-06 01:00:40 +0000 UTC" firstStartedPulling="2025-09-06 01:01:19.773197369 +0000 UTC m=+56.570934060" lastFinishedPulling="2025-09-06 01:01:34.398733497 +0000 UTC m=+71.196470180" observedRunningTime="2025-09-06 01:01:34.555458651 +0000 UTC m=+71.353195389" watchObservedRunningTime="2025-09-06 01:01:34.556202406 +0000 UTC m=+71.353939118" Sep 6 01:01:37.399856 env[1665]: time="2025-09-06T01:01:37.399801541Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:37.400451 env[1665]: time="2025-09-06T01:01:37.400401527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:37.401024 env[1665]: time="2025-09-06T01:01:37.400983158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:37.402017 env[1665]: time="2025-09-06T01:01:37.401976152Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:37.402192 env[1665]: time="2025-09-06T01:01:37.402151488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 01:01:37.402834 env[1665]: time="2025-09-06T01:01:37.402797898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 6 01:01:37.403205 env[1665]: time="2025-09-06T01:01:37.403190773Z" level=info msg="CreateContainer within sandbox \"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 01:01:37.406851 env[1665]: time="2025-09-06T01:01:37.406809625Z" level=info msg="CreateContainer within sandbox \"3ef6ea4c7b7abdd070451537a4a639f7de80ba50a1d434d2642d25e4b036930b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96de0fdce67897838ba39423c338fd7476cd3377049b1d3d6382f6078781a31e\"" Sep 6 01:01:37.407071 env[1665]: time="2025-09-06T01:01:37.407057524Z" level=info msg="StartContainer for \"96de0fdce67897838ba39423c338fd7476cd3377049b1d3d6382f6078781a31e\"" Sep 6 01:01:37.439640 env[1665]: time="2025-09-06T01:01:37.439593651Z" level=info msg="StartContainer for \"96de0fdce67897838ba39423c338fd7476cd3377049b1d3d6382f6078781a31e\" returns successfully" Sep 6 01:01:37.533000 audit[6640]: NETFILTER_CFG table=filter:122 family=2 entries=13 op=nft_register_rule pid=6640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:37.544462 kubelet[2664]: I0906 01:01:37.544434 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dff7d77c6-lds5m" podStartSLOduration=44.893954588 podStartE2EDuration="1m0.544422475s" 
podCreationTimestamp="2025-09-06 01:00:37 +0000 UTC" firstStartedPulling="2025-09-06 01:01:21.752170027 +0000 UTC m=+58.549906710" lastFinishedPulling="2025-09-06 01:01:37.402637915 +0000 UTC m=+74.200374597" observedRunningTime="2025-09-06 01:01:37.544073648 +0000 UTC m=+74.341810337" watchObservedRunningTime="2025-09-06 01:01:37.544422475 +0000 UTC m=+74.342159157" Sep 6 01:01:37.533000 audit[6640]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe1d428390 a2=0 a3=7ffe1d42837c items=0 ppid=2820 pid=6640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:37.685052 kernel: audit: type=1325 audit(1757120497.533:404): table=filter:122 family=2 entries=13 op=nft_register_rule pid=6640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:37.685109 kernel: audit: type=1300 audit(1757120497.533:404): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe1d428390 a2=0 a3=7ffe1d42837c items=0 ppid=2820 pid=6640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:37.685126 kernel: audit: type=1327 audit(1757120497.533:404): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:37.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:37.741000 audit[6640]: NETFILTER_CFG table=nat:123 family=2 entries=27 op=nft_register_chain pid=6640 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:37.799781 kernel: audit: type=1325 audit(1757120497.741:405): table=nat:123 family=2 entries=27 op=nft_register_chain pid=6640 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:37.799859 kernel: audit: type=1300 audit(1757120497.741:405): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe1d428390 a2=0 a3=7ffe1d42837c items=0 ppid=2820 pid=6640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:37.741000 audit[6640]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe1d428390 a2=0 a3=7ffe1d42837c items=0 ppid=2820 pid=6640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:37.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:37.952491 kernel: audit: type=1327 audit(1757120497.741:405): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:37.966000 audit[6642]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:37.966000 audit[6642]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe484de540 a2=0 a3=7ffe484de52c items=0 ppid=2820 pid=6642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:38.121659 kernel: audit: type=1325 audit(1757120497.966:406): table=filter:124 family=2 entries=12 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:38.121729 kernel: audit: type=1300 audit(1757120497.966:406): arch=c000003e syscall=46 success=yes 
exit=4504 a0=3 a1=7ffe484de540 a2=0 a3=7ffe484de52c items=0 ppid=2820 pid=6642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:38.121743 kernel: audit: type=1327 audit(1757120497.966:406): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:37.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:38.190000 audit[6642]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:38.190000 audit[6642]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe484de540 a2=0 a3=7ffe484de52c items=0 ppid=2820 pid=6642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:38.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:38.249495 kernel: audit: type=1325 audit(1757120498.190:407): table=nat:125 family=2 entries=22 op=nft_register_rule pid=6642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:39.277000 audit[6645]: NETFILTER_CFG table=filter:126 family=2 entries=11 op=nft_register_rule pid=6645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:39.277000 audit[6645]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe03bcadb0 a2=0 a3=7ffe03bcad9c items=0 ppid=2820 pid=6645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:39.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:39.295000 audit[6645]: NETFILTER_CFG table=nat:127 family=2 entries=29 op=nft_register_chain pid=6645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:39.295000 audit[6645]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe03bcadb0 a2=0 a3=7ffe03bcad9c items=0 ppid=2820 pid=6645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:39.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:40.333392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934973202.mount: Deactivated successfully. 
Sep 6 01:01:40.336635 env[1665]: time="2025-09-06T01:01:40.336580544Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.337213 env[1665]: time="2025-09-06T01:01:40.337171313Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.337883 env[1665]: time="2025-09-06T01:01:40.337843601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.338606 env[1665]: time="2025-09-06T01:01:40.338561671Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.338942 env[1665]: time="2025-09-06T01:01:40.338899453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 6 01:01:40.339599 env[1665]: time="2025-09-06T01:01:40.339553509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 6 01:01:40.340039 env[1665]: time="2025-09-06T01:01:40.339990691Z" level=info msg="CreateContainer within sandbox \"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 6 01:01:40.344017 env[1665]: time="2025-09-06T01:01:40.343960671Z" level=info msg="CreateContainer within sandbox \"95a532a7c8dad8fcc52c90401fb12a49d98339a15e427311cf34f2f6ec151c6e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns 
container id \"83325df0c7ed905611c4f854e1c4d41ac19bc4651fb5f475a4fbafe396fdb60d\"" Sep 6 01:01:40.344238 env[1665]: time="2025-09-06T01:01:40.344194889Z" level=info msg="StartContainer for \"83325df0c7ed905611c4f854e1c4d41ac19bc4651fb5f475a4fbafe396fdb60d\"" Sep 6 01:01:40.375861 env[1665]: time="2025-09-06T01:01:40.375792514Z" level=info msg="StartContainer for \"83325df0c7ed905611c4f854e1c4d41ac19bc4651fb5f475a4fbafe396fdb60d\" returns successfully" Sep 6 01:01:40.566992 kubelet[2664]: I0906 01:01:40.566890 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5579c8f56-68kf2" podStartSLOduration=1.566634445 podStartE2EDuration="25.566855583s" podCreationTimestamp="2025-09-06 01:01:15 +0000 UTC" firstStartedPulling="2025-09-06 01:01:16.339202662 +0000 UTC m=+53.136939346" lastFinishedPulling="2025-09-06 01:01:40.339423802 +0000 UTC m=+77.137160484" observedRunningTime="2025-09-06 01:01:40.565774588 +0000 UTC m=+77.363511328" watchObservedRunningTime="2025-09-06 01:01:40.566855583 +0000 UTC m=+77.364592305" Sep 6 01:01:40.587000 audit[6693]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=6693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:40.587000 audit[6693]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe81d62970 a2=0 a3=7ffe81d6295c items=0 ppid=2820 pid=6693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:40.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:40.606000 audit[6693]: NETFILTER_CFG table=nat:129 family=2 entries=31 op=nft_register_chain pid=6693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:40.606000 audit[6693]: SYSCALL arch=c000003e syscall=46 
success=yes exit=10884 a0=3 a1=7ffe81d62970 a2=0 a3=7ffe81d6295c items=0 ppid=2820 pid=6693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:40.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:40.985504 env[1665]: time="2025-09-06T01:01:40.985383075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.987205 env[1665]: time="2025-09-06T01:01:40.987147095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.990541 env[1665]: time="2025-09-06T01:01:40.990415649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.994301 env[1665]: time="2025-09-06T01:01:40.994232983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:40.996161 env[1665]: time="2025-09-06T01:01:40.996073679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 6 01:01:40.998790 env[1665]: time="2025-09-06T01:01:40.998721009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 6 01:01:41.001104 env[1665]: time="2025-09-06T01:01:41.001036285Z" 
level=info msg="CreateContainer within sandbox \"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 6 01:01:41.011927 env[1665]: time="2025-09-06T01:01:41.011849488Z" level=info msg="CreateContainer within sandbox \"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfc69c9ce13a77cfaa22da68550929463e0937474489968d8dfd1699cbafd183\"" Sep 6 01:01:41.012754 env[1665]: time="2025-09-06T01:01:41.012675632Z" level=info msg="StartContainer for \"cfc69c9ce13a77cfaa22da68550929463e0937474489968d8dfd1699cbafd183\"" Sep 6 01:01:41.073226 env[1665]: time="2025-09-06T01:01:41.073198909Z" level=info msg="StartContainer for \"cfc69c9ce13a77cfaa22da68550929463e0937474489968d8dfd1699cbafd183\" returns successfully" Sep 6 01:01:41.563013 kubelet[2664]: I0906 01:01:41.562966 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6dff7d77c6-jpzhj" podStartSLOduration=47.368338156 podStartE2EDuration="1m4.562950798s" podCreationTimestamp="2025-09-06 01:00:37 +0000 UTC" firstStartedPulling="2025-09-06 01:01:23.803680077 +0000 UTC m=+60.601416762" lastFinishedPulling="2025-09-06 01:01:40.998292649 +0000 UTC m=+77.796029404" observedRunningTime="2025-09-06 01:01:41.562472052 +0000 UTC m=+78.360208738" watchObservedRunningTime="2025-09-06 01:01:41.562950798 +0000 UTC m=+78.360687481" Sep 6 01:01:41.569000 audit[6746]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=6746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:41.569000 audit[6746]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffc881f9b50 a2=0 a3=7ffc881f9b3c items=0 ppid=2820 pid=6746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:41.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:41.583000 audit[6746]: NETFILTER_CFG table=nat:131 family=2 entries=34 op=nft_register_rule pid=6746 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:41.583000 audit[6746]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffc881f9b50 a2=0 a3=7ffc881f9b3c items=0 ppid=2820 pid=6746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:41.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:41.876000 audit[6748]: NETFILTER_CFG table=filter:132 family=2 entries=8 op=nft_register_rule pid=6748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:41.876000 audit[6748]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe7c4c45e0 a2=0 a3=7ffe7c4c45cc items=0 ppid=2820 pid=6748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:41.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:41.887000 audit[6748]: NETFILTER_CFG table=nat:133 family=2 entries=38 op=nft_register_chain pid=6748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:01:41.887000 audit[6748]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffe7c4c45e0 a2=0 a3=7ffe7c4c45cc items=0 ppid=2820 pid=6748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:01:41.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:01:43.347484 env[1665]: time="2025-09-06T01:01:43.347457236Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:43.348110 env[1665]: time="2025-09-06T01:01:43.348096763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:43.348692 env[1665]: time="2025-09-06T01:01:43.348680689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:43.349332 env[1665]: time="2025-09-06T01:01:43.349321800Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:01:43.349617 env[1665]: time="2025-09-06T01:01:43.349603903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 6 01:01:43.350691 env[1665]: time="2025-09-06T01:01:43.350664422Z" level=info msg="CreateContainer within sandbox \"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 6 01:01:43.355018 env[1665]: 
time="2025-09-06T01:01:43.354972802Z" level=info msg="CreateContainer within sandbox \"7db43b5064a53421162b4cc04dbc4fe6c7d0c82772eb4a36bbdb547ee1f7a9e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b392d2cd46c50e32f2d18bbdf0b37426f21b1abad76b2518d1b0fcb18a84a0ec\"" Sep 6 01:01:43.355255 env[1665]: time="2025-09-06T01:01:43.355209898Z" level=info msg="StartContainer for \"b392d2cd46c50e32f2d18bbdf0b37426f21b1abad76b2518d1b0fcb18a84a0ec\"" Sep 6 01:01:43.378209 env[1665]: time="2025-09-06T01:01:43.378157207Z" level=info msg="StartContainer for \"b392d2cd46c50e32f2d18bbdf0b37426f21b1abad76b2518d1b0fcb18a84a0ec\" returns successfully" Sep 6 01:01:43.582141 kubelet[2664]: I0906 01:01:43.582004 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-twvsq" podStartSLOduration=40.811356167 podStartE2EDuration="1m4.581966199s" podCreationTimestamp="2025-09-06 01:00:39 +0000 UTC" firstStartedPulling="2025-09-06 01:01:19.579379955 +0000 UTC m=+56.377116638" lastFinishedPulling="2025-09-06 01:01:43.349989978 +0000 UTC m=+80.147726670" observedRunningTime="2025-09-06 01:01:43.581055911 +0000 UTC m=+80.378792674" watchObservedRunningTime="2025-09-06 01:01:43.581966199 +0000 UTC m=+80.379702930" Sep 6 01:01:44.375583 kubelet[2664]: I0906 01:01:44.375524 2664 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 6 01:01:44.375583 kubelet[2664]: I0906 01:01:44.375591 2664 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 6 01:02:24.234271 env[1665]: time="2025-09-06T01:02:24.234178652Z" level=info msg="StopPodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.262 [WARNING][6928] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626", Pod:"calico-apiserver-6dff7d77c6-jpzhj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ba29c9d3d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.262 [INFO][6928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.278863 
env[1665]: 2025-09-06 01:02:24.262 [INFO][6928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" iface="eth0" netns="" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.262 [INFO][6928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.262 [INFO][6928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.272 [INFO][6944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.272 [INFO][6944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.272 [INFO][6944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.276 [WARNING][6944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.276 [INFO][6944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.277 [INFO][6944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:02:24.278863 env[1665]: 2025-09-06 01:02:24.278 [INFO][6928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.279493 env[1665]: time="2025-09-06T01:02:24.278878119Z" level=info msg="TearDown network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" successfully" Sep 6 01:02:24.279493 env[1665]: time="2025-09-06T01:02:24.278911627Z" level=info msg="StopPodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" returns successfully" Sep 6 01:02:24.279493 env[1665]: time="2025-09-06T01:02:24.279267481Z" level=info msg="RemovePodSandbox for \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" Sep 6 01:02:24.279493 env[1665]: time="2025-09-06T01:02:24.279294817Z" level=info msg="Forcibly stopping sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\"" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.296 [WARNING][6971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0", GenerateName:"calico-apiserver-6dff7d77c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffa3a4eb-9805-4bd2-aea7-ce1a33fd62ff", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dff7d77c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"77605786df642a265ea67bf41c38903a8ec4feb331d95dc9b51c15067f50f626", Pod:"calico-apiserver-6dff7d77c6-jpzhj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ba29c9d3d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.296 [INFO][6971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.296 [INFO][6971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" iface="eth0" netns="" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.296 [INFO][6971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.296 [INFO][6971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.305 [INFO][6987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.305 [INFO][6987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.305 [INFO][6987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.309 [WARNING][6987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.309 [INFO][6987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" HandleID="k8s-pod-network.86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-calico--apiserver--6dff7d77c6--jpzhj-eth0" Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.310 [INFO][6987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:02:24.311742 env[1665]: 2025-09-06 01:02:24.310 [INFO][6971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece" Sep 6 01:02:24.312247 env[1665]: time="2025-09-06T01:02:24.311764031Z" level=info msg="TearDown network for sandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" successfully" Sep 6 01:02:24.313501 env[1665]: time="2025-09-06T01:02:24.313487072Z" level=info msg="RemovePodSandbox \"86782f41d4f58a3e65217c2d484a81270c741eeace2e7092b12b4e805b997ece\" returns successfully" Sep 6 01:02:24.313752 env[1665]: time="2025-09-06T01:02:24.313737465Z" level=info msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.333 [WARNING][7010] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"85744052-5f5c-49af-a21e-68c0336acf1a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d", Pod:"coredns-7c65d6cfc9-q529h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali778439a76dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:02:24.352839 env[1665]: 2025-09-06 
01:02:24.333 [INFO][7010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.333 [INFO][7010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" iface="eth0" netns="" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.333 [INFO][7010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.333 [INFO][7010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.345 [INFO][7024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.345 [INFO][7024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.345 [INFO][7024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.349 [WARNING][7024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.349 [INFO][7024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.351 [INFO][7024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:02:24.352839 env[1665]: 2025-09-06 01:02:24.351 [INFO][7010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.353270 env[1665]: time="2025-09-06T01:02:24.352861854Z" level=info msg="TearDown network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" successfully" Sep 6 01:02:24.353270 env[1665]: time="2025-09-06T01:02:24.352889641Z" level=info msg="StopPodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" returns successfully" Sep 6 01:02:24.353270 env[1665]: time="2025-09-06T01:02:24.353197849Z" level=info msg="RemovePodSandbox for \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" Sep 6 01:02:24.353270 env[1665]: time="2025-09-06T01:02:24.353218836Z" level=info msg="Forcibly stopping sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\"" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.372 [WARNING][7050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"85744052-5f5c-49af-a21e-68c0336acf1a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 6, 1, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-4cc2a8c2f2", ContainerID:"830bf6a50cb9644a152e97e14306e53e81c68b8e17c8da9873e3d9285261924d", Pod:"coredns-7c65d6cfc9-q529h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali778439a76dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 6 01:02:24.391529 env[1665]: 2025-09-06 
01:02:24.372 [INFO][7050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.372 [INFO][7050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" iface="eth0" netns="" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.372 [INFO][7050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.372 [INFO][7050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.384 [INFO][7067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.384 [INFO][7067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.384 [INFO][7067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.388 [WARNING][7067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.388 [INFO][7067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" HandleID="k8s-pod-network.bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Workload="ci--3510.3.8--n--4cc2a8c2f2-k8s-coredns--7c65d6cfc9--q529h-eth0" Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.390 [INFO][7067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 6 01:02:24.391529 env[1665]: 2025-09-06 01:02:24.390 [INFO][7050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5" Sep 6 01:02:24.391928 env[1665]: time="2025-09-06T01:02:24.391546051Z" level=info msg="TearDown network for sandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" successfully" Sep 6 01:02:24.393168 env[1665]: time="2025-09-06T01:02:24.393125857Z" level=info msg="RemovePodSandbox \"bf73a01df9d190cf66939d7195a81559a0038e1b65d00921597f8e21499c74e5\" returns successfully" Sep 6 01:06:47.561239 systemd[1]: Started sshd@9-139.178.90.135:22-139.178.68.195:38682.service. Sep 6 01:06:47.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.135:22-139.178.68.195:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:06:47.588172 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 6 01:06:47.588229 kernel: audit: type=1130 audit(1757120807.559:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.135:22-139.178.68.195:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:47.701000 audit[8161]: USER_ACCT pid=8161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.702339 sshd[8161]: Accepted publickey for core from 139.178.68.195 port 38682 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:06:47.703764 sshd[8161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:06:47.706808 systemd-logind[1704]: New session 12 of user core. Sep 6 01:06:47.707868 systemd[1]: Started session-12.scope. Sep 6 01:06:47.790449 sshd[8161]: pam_unix(sshd:session): session closed for user core Sep 6 01:06:47.792411 systemd[1]: sshd@9-139.178.90.135:22-139.178.68.195:38682.service: Deactivated successfully. Sep 6 01:06:47.793071 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:06:47.793079 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:06:47.702000 audit[8161]: CRED_ACQ pid=8161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.793503 systemd-logind[1704]: Removed session 12. 
Sep 6 01:06:47.884174 kernel: audit: type=1101 audit(1757120807.701:417): pid=8161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.884214 kernel: audit: type=1103 audit(1757120807.702:418): pid=8161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.884234 kernel: audit: type=1006 audit(1757120807.702:419): pid=8161 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 6 01:06:47.943586 kernel: audit: type=1300 audit(1757120807.702:419): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca3cc750 a2=3 a3=0 items=0 ppid=1 pid=8161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:47.702000 audit[8161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca3cc750 a2=3 a3=0 items=0 ppid=1 pid=8161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:48.036975 kernel: audit: type=1327 audit(1757120807.702:419): proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:47.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:48.067862 kernel: audit: type=1105 audit(1757120807.708:420): pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.708000 audit[8161]: USER_START pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:48.163746 kernel: audit: type=1103 audit(1757120807.709:421): pid=8164 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.709000 audit[8164]: CRED_ACQ pid=8164 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:48.253496 kernel: audit: type=1106 audit(1757120807.789:422): pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.789000 audit[8161]: USER_END pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:48.349397 kernel: audit: type=1104 audit(1757120807.790:423): pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Sep 6 01:06:47.790000 audit[8161]: CRED_DISP pid=8161 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:47.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.90.135:22-139.178.68.195:38682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:52.798236 systemd[1]: Started sshd@10-139.178.90.135:22-139.178.68.195:47152.service. Sep 6 01:06:52.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.135:22-139.178.68.195:47152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:52.830443 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:06:52.830571 kernel: audit: type=1130 audit(1757120812.797:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.135:22-139.178.68.195:47152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:52.940742 sshd[8203]: Accepted publickey for core from 139.178.68.195 port 47152 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:06:52.939000 audit[8203]: USER_ACCT pid=8203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:52.942764 sshd[8203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:06:52.945231 systemd-logind[1704]: New session 13 of user core. Sep 6 01:06:52.945824 systemd[1]: Started session-13.scope. 
Sep 6 01:06:52.941000 audit[8203]: CRED_ACQ pid=8203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.123106 kernel: audit: type=1101 audit(1757120812.939:426): pid=8203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.123222 kernel: audit: type=1103 audit(1757120812.941:427): pid=8203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.123265 kernel: audit: type=1006 audit(1757120812.941:428): pid=8203 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 6 01:06:53.125448 sshd[8203]: pam_unix(sshd:session): session closed for user core Sep 6 01:06:53.126659 systemd[1]: sshd@10-139.178.90.135:22-139.178.68.195:47152.service: Deactivated successfully. Sep 6 01:06:53.127256 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:06:53.127262 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:06:53.127741 systemd-logind[1704]: Removed session 13. 
Sep 6 01:06:52.941000 audit[8203]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58e53350 a2=3 a3=0 items=0 ppid=1 pid=8203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:53.273682 kernel: audit: type=1300 audit(1757120812.941:428): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58e53350 a2=3 a3=0 items=0 ppid=1 pid=8203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:53.273799 kernel: audit: type=1327 audit(1757120812.941:428): proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:52.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:52.946000 audit[8203]: USER_START pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.398667 kernel: audit: type=1105 audit(1757120812.946:429): pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.398758 kernel: audit: type=1103 audit(1757120812.947:430): pid=8206 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:52.947000 audit[8206]: CRED_ACQ pid=8206 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.124000 audit[8203]: USER_END pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.488443 kernel: audit: type=1106 audit(1757120813.124:431): pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.124000 audit[8203]: CRED_DISP pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.672958 kernel: audit: type=1104 audit(1757120813.124:432): pid=8203 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:53.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.90.135:22-139.178.68.195:47152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.132490 systemd[1]: Started sshd@11-139.178.90.135:22-139.178.68.195:47162.service. 
Sep 6 01:06:58.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.135:22-139.178.68.195:47162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.159753 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:06:58.159850 kernel: audit: type=1130 audit(1757120818.131:434): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.135:22-139.178.68.195:47162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.273000 audit[8232]: USER_ACCT pid=8232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.274737 sshd[8232]: Accepted publickey for core from 139.178.68.195 port 47162 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:06:58.277044 sshd[8232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:06:58.279710 systemd-logind[1704]: New session 14 of user core. Sep 6 01:06:58.280207 systemd[1]: Started session-14.scope. Sep 6 01:06:58.357514 sshd[8232]: pam_unix(sshd:session): session closed for user core Sep 6 01:06:58.359124 systemd[1]: Started sshd@12-139.178.90.135:22-139.178.68.195:47180.service. Sep 6 01:06:58.359453 systemd[1]: sshd@11-139.178.90.135:22-139.178.68.195:47162.service: Deactivated successfully. Sep 6 01:06:58.359967 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit. Sep 6 01:06:58.360014 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:06:58.360402 systemd-logind[1704]: Removed session 14. 
Sep 6 01:06:58.275000 audit[8232]: CRED_ACQ pid=8232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.456432 kernel: audit: type=1101 audit(1757120818.273:435): pid=8232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.456470 kernel: audit: type=1103 audit(1757120818.275:436): pid=8232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.456492 kernel: audit: type=1006 audit(1757120818.275:437): pid=8232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 6 01:06:58.514907 kernel: audit: type=1300 audit(1757120818.275:437): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec5e060a0 a2=3 a3=0 items=0 ppid=1 pid=8232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:58.275000 audit[8232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec5e060a0 a2=3 a3=0 items=0 ppid=1 pid=8232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:58.539423 sshd[8258]: Accepted publickey for core from 139.178.68.195 port 47180 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:06:58.540561 sshd[8258]: pam_unix(sshd:session): session opened 
for user core(uid=500) by (uid=0) Sep 6 01:06:58.543104 systemd-logind[1704]: New session 15 of user core. Sep 6 01:06:58.543537 systemd[1]: Started session-15.scope. Sep 6 01:06:58.606873 kernel: audit: type=1327 audit(1757120818.275:437): proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:58.275000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:58.637099 sshd[8258]: pam_unix(sshd:session): session closed for user core Sep 6 01:06:58.637286 kernel: audit: type=1105 audit(1757120818.280:438): pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.280000 audit[8232]: USER_START pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.639052 systemd[1]: Started sshd@13-139.178.90.135:22-139.178.68.195:47190.service. Sep 6 01:06:58.639514 systemd[1]: sshd@12-139.178.90.135:22-139.178.68.195:47180.service: Deactivated successfully. Sep 6 01:06:58.640148 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit. Sep 6 01:06:58.640206 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:06:58.640731 systemd-logind[1704]: Removed session 15. 
Sep 6 01:06:58.731707 kernel: audit: type=1103 audit(1757120818.281:439): pid=8235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.281000 audit[8235]: CRED_ACQ pid=8235 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.356000 audit[8232]: USER_END pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.845197 sshd[8282]: Accepted publickey for core from 139.178.68.195 port 47190 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:06:58.847064 sshd[8282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:06:58.849548 systemd-logind[1704]: New session 16 of user core. Sep 6 01:06:58.850205 systemd[1]: Started session-16.scope. 
Sep 6 01:06:58.916173 kernel: audit: type=1106 audit(1757120818.356:440): pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.916244 kernel: audit: type=1104 audit(1757120818.356:441): pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.356000 audit[8232]: CRED_DISP pid=8232 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.90.135:22-139.178.68.195:47180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.90.135:22-139.178.68.195:47162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:06:58.538000 audit[8258]: USER_ACCT pid=8258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.538000 audit[8258]: CRED_ACQ pid=8258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.538000 audit[8258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc176229a0 a2=3 a3=0 items=0 ppid=1 pid=8258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:58.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:58.543000 audit[8258]: USER_START pid=8258 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.544000 audit[8262]: CRED_ACQ pid=8262 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.636000 audit[8258]: USER_END pid=8258 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.636000 audit[8258]: CRED_DISP 
pid=8258 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.90.135:22-139.178.68.195:47190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.90.135:22-139.178.68.195:47180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:58.844000 audit[8282]: USER_ACCT pid=8282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.845000 audit[8282]: CRED_ACQ pid=8282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.845000 audit[8282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc20d3db20 a2=3 a3=0 items=0 ppid=1 pid=8282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:06:58.845000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:06:58.850000 audit[8282]: USER_START pid=8282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:58.851000 audit[8286]: CRED_ACQ pid=8286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:59.008479 sshd[8282]: pam_unix(sshd:session): session closed for user core Sep 6 01:06:59.007000 audit[8282]: USER_END pid=8282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:59.007000 audit[8282]: CRED_DISP pid=8282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:06:59.009832 systemd[1]: sshd@13-139.178.90.135:22-139.178.68.195:47190.service: Deactivated successfully. Sep 6 01:06:59.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.90.135:22-139.178.68.195:47190 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:06:59.010363 systemd-logind[1704]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:06:59.010399 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:06:59.010838 systemd-logind[1704]: Removed session 16. Sep 6 01:07:04.012366 systemd[1]: Started sshd@14-139.178.90.135:22-139.178.68.195:42552.service. 
Sep 6 01:07:04.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.90.135:22-139.178.68.195:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:04.039353 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 6 01:07:04.039427 kernel: audit: type=1130 audit(1757120824.011:461): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.90.135:22-139.178.68.195:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:04.153000 audit[8345]: USER_ACCT pid=8345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.153964 sshd[8345]: Accepted publickey for core from 139.178.68.195 port 42552 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:04.154750 sshd[8345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:04.157165 systemd-logind[1704]: New session 17 of user core. Sep 6 01:07:04.157638 systemd[1]: Started session-17.scope. 
Sep 6 01:07:04.153000 audit[8345]: CRED_ACQ pid=8345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.335645 kernel: audit: type=1101 audit(1757120824.153:462): pid=8345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.335686 kernel: audit: type=1103 audit(1757120824.153:463): pid=8345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.335703 kernel: audit: type=1006 audit(1757120824.153:464): pid=8345 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 6 01:07:04.394198 kernel: audit: type=1300 audit(1757120824.153:464): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd17c30750 a2=3 a3=0 items=0 ppid=1 pid=8345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:04.153000 audit[8345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd17c30750 a2=3 a3=0 items=0 ppid=1 pid=8345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:04.153000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:04.516678 kernel: audit: type=1327 audit(1757120824.153:464): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:04.516715 
kernel: audit: type=1105 audit(1757120824.159:465): pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.159000 audit[8345]: USER_START pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.159000 audit[8348]: CRED_ACQ pid=8348 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.611320 sshd[8345]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:04.612845 systemd[1]: sshd@14-139.178.90.135:22-139.178.68.195:42552.service: Deactivated successfully. Sep 6 01:07:04.613447 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:07:04.613488 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:07:04.614042 systemd-logind[1704]: Removed session 17. 
Sep 6 01:07:04.700187 kernel: audit: type=1103 audit(1757120824.159:466): pid=8348 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.700228 kernel: audit: type=1106 audit(1757120824.610:467): pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.610000 audit[8345]: USER_END pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.795642 kernel: audit: type=1104 audit(1757120824.610:468): pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.610000 audit[8345]: CRED_DISP pid=8345 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:04.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.90.135:22-139.178.68.195:42552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:09.525526 systemd[1]: Started sshd@15-139.178.90.135:22-139.178.68.195:42566.service. 
Sep 6 01:07:09.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.135:22-139.178.68.195:42566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:09.552290 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:07:09.552378 kernel: audit: type=1130 audit(1757120829.524:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.135:22-139.178.68.195:42566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:09.666000 audit[8454]: USER_ACCT pid=8454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.666965 sshd[8454]: Accepted publickey for core from 139.178.68.195 port 42566 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:09.667746 sshd[8454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:09.670338 systemd-logind[1704]: New session 18 of user core. Sep 6 01:07:09.670891 systemd[1]: Started session-18.scope. 
Sep 6 01:07:09.666000 audit[8454]: CRED_ACQ pid=8454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.848503 kernel: audit: type=1101 audit(1757120829.666:471): pid=8454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.848543 kernel: audit: type=1103 audit(1757120829.666:472): pid=8454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.848565 kernel: audit: type=1006 audit(1757120829.666:473): pid=8454 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 6 01:07:09.907052 kernel: audit: type=1300 audit(1757120829.666:473): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7af12570 a2=3 a3=0 items=0 ppid=1 pid=8454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:09.666000 audit[8454]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7af12570 a2=3 a3=0 items=0 ppid=1 pid=8454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:09.998950 kernel: audit: type=1327 audit(1757120829.666:473): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:09.666000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D 
Sep 6 01:07:10.029381 kernel: audit: type=1105 audit(1757120829.672:474): pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.672000 audit[8454]: USER_START pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:09.672000 audit[8457]: CRED_ACQ pid=8457 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.123994 sshd[8454]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:10.125489 systemd[1]: sshd@15-139.178.90.135:22-139.178.68.195:42566.service: Deactivated successfully. Sep 6 01:07:10.126123 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 01:07:10.126135 systemd-logind[1704]: Session 18 logged out. Waiting for processes to exit. Sep 6 01:07:10.126668 systemd-logind[1704]: Removed session 18. 
Sep 6 01:07:10.212821 kernel: audit: type=1103 audit(1757120829.672:475): pid=8457 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.212862 kernel: audit: type=1106 audit(1757120830.123:476): pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.123000 audit[8454]: USER_END pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.308218 kernel: audit: type=1104 audit(1757120830.123:477): pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.123000 audit[8454]: CRED_DISP pid=8454 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:10.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.90.135:22-139.178.68.195:42566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.039573 systemd[1]: Started sshd@16-139.178.90.135:22-139.178.68.195:56400.service. 
Sep 6 01:07:15.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.135:22-139.178.68.195:56400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.066927 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:07:15.066997 kernel: audit: type=1130 audit(1757120835.038:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.135:22-139.178.68.195:56400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.180000 audit[8481]: USER_ACCT pid=8481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.181281 sshd[8481]: Accepted publickey for core from 139.178.68.195 port 56400 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:15.182784 sshd[8481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:15.185615 systemd-logind[1704]: New session 19 of user core. Sep 6 01:07:15.186049 systemd[1]: Started session-19.scope. Sep 6 01:07:15.181000 audit[8481]: CRED_ACQ pid=8481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.274945 sshd[8481]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:15.276564 systemd[1]: Started sshd@17-139.178.90.135:22-139.178.68.195:56406.service. Sep 6 01:07:15.276887 systemd[1]: sshd@16-139.178.90.135:22-139.178.68.195:56400.service: Deactivated successfully. 
Sep 6 01:07:15.277424 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit. Sep 6 01:07:15.277435 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 01:07:15.278004 systemd-logind[1704]: Removed session 19. Sep 6 01:07:15.362790 kernel: audit: type=1101 audit(1757120835.180:480): pid=8481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.362840 kernel: audit: type=1103 audit(1757120835.181:481): pid=8481 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.362858 kernel: audit: type=1006 audit(1757120835.181:482): pid=8481 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 6 01:07:15.421296 kernel: audit: type=1300 audit(1757120835.181:482): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd900493f0 a2=3 a3=0 items=0 ppid=1 pid=8481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:15.181000 audit[8481]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd900493f0 a2=3 a3=0 items=0 ppid=1 pid=8481 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:15.445895 sshd[8505]: Accepted publickey for core from 139.178.68.195 port 56406 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:15.448735 sshd[8505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:15.451573 systemd-logind[1704]: New session 20 of user core. 
Sep 6 01:07:15.451941 systemd[1]: Started session-20.scope. Sep 6 01:07:15.181000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:15.543619 kernel: audit: type=1327 audit(1757120835.181:482): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:15.543686 kernel: audit: type=1105 audit(1757120835.187:483): pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.187000 audit[8481]: USER_START pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.571708 sshd[8505]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:15.573279 systemd[1]: Started sshd@18-139.178.90.135:22-139.178.68.195:56418.service. Sep 6 01:07:15.573618 systemd[1]: sshd@17-139.178.90.135:22-139.178.68.195:56406.service: Deactivated successfully. Sep 6 01:07:15.574164 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit. Sep 6 01:07:15.574189 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 01:07:15.574837 systemd-logind[1704]: Removed session 20. 
Sep 6 01:07:15.637970 kernel: audit: type=1103 audit(1757120835.187:484): pid=8484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.187000 audit[8484]: CRED_ACQ pid=8484 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.274000 audit[8481]: USER_END pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.751767 sshd[8529]: Accepted publickey for core from 139.178.68.195 port 56418 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:15.754721 sshd[8529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:15.757400 systemd-logind[1704]: New session 21 of user core. Sep 6 01:07:15.757833 systemd[1]: Started session-21.scope. 
Sep 6 01:07:15.822464 kernel: audit: type=1106 audit(1757120835.274:485): pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.274000 audit[8481]: CRED_DISP pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.911722 kernel: audit: type=1104 audit(1757120835.274:486): pid=8481 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.90.135:22-139.178.68.195:56406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' 
Sep 6 01:07:15.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.90.135:22-139.178.68.195:56400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.444000 audit[8505]: USER_ACCT pid=8505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.448000 audit[8505]: CRED_ACQ pid=8505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.448000 audit[8505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc703353f0 a2=3 a3=0 items=0 ppid=1 pid=8505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:15.448000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:15.454000 audit[8505]: USER_START pid=8505 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.455000 audit[8510]: CRED_ACQ pid=8510 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.571000 audit[8505]: USER_END pid=8505 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Sep 6 01:07:15.571000 audit[8505]: CRED_DISP pid=8505 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.90.135:22-139.178.68.195:56418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.90.135:22-139.178.68.195:56406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:15.751000 audit[8529]: USER_ACCT pid=8529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.753000 audit[8529]: CRED_ACQ pid=8529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.753000 audit[8529]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdb7b3600 a2=3 a3=0 items=0 ppid=1 pid=8529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:15.753000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D 
Sep 6 01:07:15.760000 audit[8529]: USER_START pid=8529 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:15.760000 audit[8534]: CRED_ACQ pid=8534 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.907000 audit[8561]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=8561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:16.907000 audit[8561]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fffd4be80b0 a2=0 a3=7fffd4be809c items=0 ppid=2820 pid=8561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:16.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:16.915628 sshd[8529]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:16.914000 audit[8529]: USER_END pid=8529 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.914000 audit[8529]: CRED_DISP pid=8529 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.917259 systemd[1]: Started sshd@19-139.178.90.135:22-139.178.68.195:56430.service. 
Sep 6 01:07:16.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.90.135:22-139.178.68.195:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:16.917582 systemd[1]: sshd@18-139.178.90.135:22-139.178.68.195:56418.service: Deactivated successfully. Sep 6 01:07:16.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.90.135:22-139.178.68.195:56418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:16.918254 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 01:07:16.918351 systemd-logind[1704]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:07:16.918863 systemd-logind[1704]: Removed session 21. Sep 6 01:07:16.918000 audit[8561]: NETFILTER_CFG table=nat:135 family=2 entries=26 op=nft_register_rule pid=8561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:16.918000 audit[8561]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fffd4be80b0 a2=0 a3=0 items=0 ppid=2820 pid=8561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:16.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:16.933000 audit[8568]: NETFILTER_CFG table=filter:136 family=2 entries=32 op=nft_register_rule pid=8568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Sep 6 01:07:16.933000 audit[8568]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc4cb31510 a2=0 a3=7ffc4cb314fc items=0 ppid=2820 pid=8568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:16.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:16.945000 audit[8568]: NETFILTER_CFG table=nat:137 family=2 entries=26 op=nft_register_rule pid=8568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:16.945000 audit[8568]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc4cb31510 a2=0 a3=0 items=0 ppid=2820 pid=8568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:16.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:16.947969 sshd[8564]: Accepted publickey for core from 139.178.68.195 port 56430 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:16.947000 audit[8564]: USER_ACCT pid=8564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.947000 audit[8564]: CRED_ACQ pid=8564 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.947000 audit[8564]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd1bd2a20 a2=3 a3=0 items=0 ppid=1 pid=8564 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 6 01:07:16.947000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:16.948849 sshd[8564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:16.951599 systemd-logind[1704]: New session 22 of user core. Sep 6 01:07:16.952081 systemd[1]: Started session-22.scope. Sep 6 01:07:16.952000 audit[8564]: USER_START pid=8564 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:16.953000 audit[8572]: CRED_ACQ pid=8572 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.128212 sshd[8564]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:17.127000 audit[8564]: USER_END pid=8564 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.127000 audit[8564]: CRED_DISP pid=8564 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.130512 systemd[1]: Started sshd@20-139.178.90.135:22-139.178.68.195:56434.service. 
Sep 6 01:07:17.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.90.135:22-139.178.68.195:56434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:17.130952 systemd[1]: sshd@19-139.178.90.135:22-139.178.68.195:56430.service: Deactivated successfully. Sep 6 01:07:17.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.90.135:22-139.178.68.195:56430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:17.131592 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit. Sep 6 01:07:17.131631 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 01:07:17.132093 systemd-logind[1704]: Removed session 22. Sep 6 01:07:17.174000 audit[8592]: USER_ACCT pid=8592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.176655 sshd[8592]: Accepted publickey for core from 139.178.68.195 port 56434 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:17.177000 audit[8592]: CRED_ACQ pid=8592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.177000 audit[8592]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf93d1f90 a2=3 a3=0 items=0 ppid=1 pid=8592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:17.177000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:17.179604 sshd[8592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:17.189587 systemd-logind[1704]: New session 23 of user core. 
Sep 6 01:07:17.191826 systemd[1]: Started session-23.scope. Sep 6 01:07:17.204000 audit[8592]: USER_START pid=8592 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.207000 audit[8597]: CRED_ACQ pid=8597 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.339203 sshd[8592]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:17.338000 audit[8592]: USER_END pid=8592 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.338000 audit[8592]: CRED_DISP pid=8592 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:17.340930 systemd[1]: sshd@20-139.178.90.135:22-139.178.68.195:56434.service: Deactivated successfully. Sep 6 01:07:17.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.90.135:22-139.178.68.195:56434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:17.341782 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:07:17.341814 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:07:17.342402 systemd-logind[1704]: Removed session 23. 
Sep 6 01:07:22.180000 audit[8623]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=8623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:22.208313 kernel: kauditd_printk_skb: 57 callbacks suppressed Sep 6 01:07:22.208375 kernel: audit: type=1325 audit(1757120842.180:528): table=filter:138 family=2 entries=20 op=nft_register_rule pid=8623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:22.180000 audit[8623]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffa395f270 a2=0 a3=7fffa395f25c items=0 ppid=2820 pid=8623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:22.341720 systemd[1]: Started sshd@21-139.178.90.135:22-139.178.68.195:36454.service. Sep 6 01:07:22.362630 kernel: audit: type=1300 audit(1757120842.180:528): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffa395f270 a2=0 a3=7fffa395f25c items=0 ppid=2820 pid=8623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:22.362683 kernel: audit: type=1327 audit(1757120842.180:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:22.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Sep 6 01:07:22.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.90.135:22-139.178.68.195:36454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:22.445124 sshd[8624]: Accepted publickey for core from 139.178.68.195 port 36454 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:22.447404 sshd[8624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:22.449900 systemd-logind[1704]: New session 24 of user core. Sep 6 01:07:22.450351 systemd[1]: Started session-24.scope. Sep 6 01:07:22.508707 kernel: audit: type=1130 audit(1757120842.340:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.90.135:22-139.178.68.195:36454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:22.508795 kernel: audit: type=1325 audit(1757120842.420:530): table=nat:139 family=2 entries=110 op=nft_register_chain pid=8623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:22.420000 audit[8623]: NETFILTER_CFG table=nat:139 family=2 entries=110 op=nft_register_chain pid=8623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 6 01:07:22.526195 sshd[8624]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:22.527664 systemd[1]: sshd@21-139.178.90.135:22-139.178.68.195:36454.service: Deactivated successfully. Sep 6 01:07:22.528283 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:07:22.528294 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:07:22.528778 systemd-logind[1704]: Removed session 24. 
Sep 6 01:07:22.420000 audit[8623]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fffa395f270 a2=0 a3=7fffa395f25c items=0 ppid=2820 pid=8623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:22.664286 kernel: audit: type=1300 audit(1757120842.420:530): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7fffa395f270 a2=0 a3=7fffa395f25c items=0 ppid=2820 pid=8623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:22.664395 kernel: audit: type=1327 audit(1757120842.420:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:22.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 6 01:07:22.443000 audit[8624]: USER_ACCT pid=8624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.814298 kernel: audit: type=1101 audit(1757120842.443:531): pid=8624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.814355 kernel: audit: type=1103 audit(1757120842.445:532): pid=8624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.445000 audit[8624]: CRED_ACQ pid=8624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.964452 kernel: audit: type=1006 audit(1757120842.445:533): pid=8624 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Sep 6 01:07:22.445000 audit[8624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd94de660 a2=3 a3=0 items=0 ppid=1 pid=8624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:22.445000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:22.451000 audit[8624]: USER_START pid=8624 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.451000 audit[8627]: CRED_ACQ pid=8627 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.525000 audit[8624]: USER_END pid=8624 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.525000 audit[8624]: CRED_DISP pid=8624 uid=0 auid=500 ses=24 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:22.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.90.135:22-139.178.68.195:36454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:27.532550 systemd[1]: Started sshd@22-139.178.90.135:22-139.178.68.195:36466.service. Sep 6 01:07:27.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.90.135:22-139.178.68.195:36466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:27.559556 kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 6 01:07:27.559596 kernel: audit: type=1130 audit(1757120847.531:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.90.135:22-139.178.68.195:36466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:27.674000 audit[8652]: USER_ACCT pid=8652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.674876 sshd[8652]: Accepted publickey for core from 139.178.68.195 port 36466 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:27.676247 sshd[8652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:27.678915 systemd-logind[1704]: New session 25 of user core. Sep 6 01:07:27.679389 systemd[1]: Started session-25.scope. 
Sep 6 01:07:27.756613 sshd[8652]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:27.758082 systemd[1]: sshd@22-139.178.90.135:22-139.178.68.195:36466.service: Deactivated successfully. Sep 6 01:07:27.758710 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit. Sep 6 01:07:27.758725 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 01:07:27.759200 systemd-logind[1704]: Removed session 25. Sep 6 01:07:27.674000 audit[8652]: CRED_ACQ pid=8652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.857185 kernel: audit: type=1101 audit(1757120847.674:540): pid=8652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.857220 kernel: audit: type=1103 audit(1757120847.674:541): pid=8652 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.857239 kernel: audit: type=1006 audit(1757120847.674:542): pid=8652 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Sep 6 01:07:27.916109 kernel: audit: type=1300 audit(1757120847.674:542): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b4b2f10 a2=3 a3=0 items=0 ppid=1 pid=8652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:27.674000 audit[8652]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7ffc1b4b2f10 a2=3 a3=0 items=0 ppid=1 pid=8652 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:28.008730 kernel: audit: type=1327 audit(1757120847.674:542): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:27.674000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:28.039401 kernel: audit: type=1105 audit(1757120847.679:543): pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.679000 audit[8652]: USER_START pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:28.134500 kernel: audit: type=1103 audit(1757120847.680:544): pid=8655 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.680000 audit[8655]: CRED_ACQ pid=8655 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:28.223622 kernel: audit: type=1106 audit(1757120847.755:545): pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.755000 audit[8652]: USER_END pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:28.319737 kernel: audit: type=1104 audit(1757120847.756:546): pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.756000 audit[8652]: CRED_DISP pid=8652 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:27.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.90.135:22-139.178.68.195:36466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:32.763414 systemd[1]: Started sshd@23-139.178.90.135:22-139.178.68.195:49176.service. Sep 6 01:07:32.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.135:22-139.178.68.195:49176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:32.805918 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:07:32.806005 kernel: audit: type=1130 audit(1757120852.762:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.135:22-139.178.68.195:49176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:07:32.919000 audit[8716]: USER_ACCT pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:32.920620 sshd[8716]: Accepted publickey for core from 139.178.68.195 port 49176 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:32.921792 sshd[8716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:32.924377 systemd-logind[1704]: New session 26 of user core. Sep 6 01:07:32.924835 systemd[1]: Started session-26.scope. Sep 6 01:07:33.001545 sshd[8716]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:33.002903 systemd[1]: sshd@23-139.178.90.135:22-139.178.68.195:49176.service: Deactivated successfully. Sep 6 01:07:33.003515 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 01:07:33.003574 systemd-logind[1704]: Session 26 logged out. Waiting for processes to exit. Sep 6 01:07:33.004119 systemd-logind[1704]: Removed session 26. 
Sep 6 01:07:32.920000 audit[8716]: CRED_ACQ pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.102944 kernel: audit: type=1101 audit(1757120852.919:549): pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.103003 kernel: audit: type=1103 audit(1757120852.920:550): pid=8716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.103020 kernel: audit: type=1006 audit(1757120852.920:551): pid=8716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Sep 6 01:07:33.161504 kernel: audit: type=1300 audit(1757120852.920:551): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb9cb65e0 a2=3 a3=0 items=0 ppid=1 pid=8716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:32.920000 audit[8716]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb9cb65e0 a2=3 a3=0 items=0 ppid=1 pid=8716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:33.253458 kernel: audit: type=1327 audit(1757120852.920:551): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:32.920000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:33.283918 
kernel: audit: type=1105 audit(1757120852.925:552): pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:32.925000 audit[8716]: USER_START pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.378319 kernel: audit: type=1103 audit(1757120852.926:553): pid=8719 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:32.926000 audit[8719]: CRED_ACQ pid=8719 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.467572 kernel: audit: type=1106 audit(1757120853.000:554): pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.000000 audit[8716]: USER_END pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.562992 
kernel: audit: type=1104 audit(1757120853.000:555): pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.000000 audit[8716]: CRED_DISP pid=8716 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:33.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.90.135:22-139.178.68.195:49176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:38.008596 systemd[1]: Started sshd@24-139.178.90.135:22-139.178.68.195:49182.service. Sep 6 01:07:38.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.135:22-139.178.68.195:49182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:38.035758 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:07:38.035826 kernel: audit: type=1130 audit(1757120858.007:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.135:22-139.178.68.195:49182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:07:38.148000 audit[8831]: USER_ACCT pid=8831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.150297 sshd[8831]: Accepted publickey for core from 139.178.68.195 port 49182 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:38.153909 sshd[8831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:38.163554 systemd-logind[1704]: New session 27 of user core. Sep 6 01:07:38.165832 systemd[1]: Started session-27.scope. Sep 6 01:07:38.151000 audit[8831]: CRED_ACQ pid=8831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.250217 sshd[8831]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:38.251508 systemd[1]: sshd@24-139.178.90.135:22-139.178.68.195:49182.service: Deactivated successfully. Sep 6 01:07:38.252135 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 01:07:38.252136 systemd-logind[1704]: Session 27 logged out. Waiting for processes to exit. Sep 6 01:07:38.252661 systemd-logind[1704]: Removed session 27. 
Sep 6 01:07:38.331968 kernel: audit: type=1101 audit(1757120858.148:558): pid=8831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.332007 kernel: audit: type=1103 audit(1757120858.151:559): pid=8831 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.332024 kernel: audit: type=1006 audit(1757120858.151:560): pid=8831 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Sep 6 01:07:38.390433 kernel: audit: type=1300 audit(1757120858.151:560): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca21ff1b0 a2=3 a3=0 items=0 ppid=1 pid=8831 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:38.151000 audit[8831]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca21ff1b0 a2=3 a3=0 items=0 ppid=1 pid=8831 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:38.482474 kernel: audit: type=1327 audit(1757120858.151:560): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:38.151000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:38.512873 kernel: audit: type=1105 audit(1757120858.173:561): pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.173000 audit[8831]: USER_START pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.607180 kernel: audit: type=1103 audit(1757120858.175:562): pid=8834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.175000 audit[8834]: CRED_ACQ pid=8834 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.696220 kernel: audit: type=1106 audit(1757120858.249:563): pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.249000 audit[8831]: USER_END pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.791507 kernel: audit: type=1104 audit(1757120858.249:564): pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Sep 6 01:07:38.249000 audit[8831]: CRED_DISP pid=8831 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:38.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.90.135:22-139.178.68.195:49182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:43.256381 systemd[1]: Started sshd@25-139.178.90.135:22-139.178.68.195:36346.service. Sep 6 01:07:43.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.135:22-139.178.68.195:36346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:43.283582 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:07:43.283667 kernel: audit: type=1130 audit(1757120863.255:566): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.135:22-139.178.68.195:36346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:07:43.397747 sshd[8858]: Accepted publickey for core from 139.178.68.195 port 36346 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:07:43.396000 audit[8858]: USER_ACCT pid=8858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.398750 sshd[8858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:07:43.401244 systemd-logind[1704]: New session 28 of user core. Sep 6 01:07:43.401702 systemd[1]: Started session-28.scope. 
Sep 6 01:07:43.479692 sshd[8858]: pam_unix(sshd:session): session closed for user core Sep 6 01:07:43.481153 systemd[1]: sshd@25-139.178.90.135:22-139.178.68.195:36346.service: Deactivated successfully. Sep 6 01:07:43.481816 systemd[1]: session-28.scope: Deactivated successfully. Sep 6 01:07:43.481847 systemd-logind[1704]: Session 28 logged out. Waiting for processes to exit. Sep 6 01:07:43.482329 systemd-logind[1704]: Removed session 28. Sep 6 01:07:43.397000 audit[8858]: CRED_ACQ pid=8858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.579500 kernel: audit: type=1101 audit(1757120863.396:567): pid=8858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.579537 kernel: audit: type=1103 audit(1757120863.397:568): pid=8858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.579556 kernel: audit: type=1006 audit(1757120863.397:569): pid=8858 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Sep 6 01:07:43.637899 kernel: audit: type=1300 audit(1757120863.397:569): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6ab46520 a2=3 a3=0 items=0 ppid=1 pid=8858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:43.397000 audit[8858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=5 a1=7ffc6ab46520 a2=3 a3=0 items=0 ppid=1 pid=8858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:07:43.729721 kernel: audit: type=1327 audit(1757120863.397:569): proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:43.397000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 6 01:07:43.760123 kernel: audit: type=1105 audit(1757120863.402:570): pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.402000 audit[8858]: USER_START pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.854786 kernel: audit: type=1103 audit(1757120863.403:571): pid=8861 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.403000 audit[8861]: CRED_ACQ pid=8861 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.944126 kernel: audit: type=1106 audit(1757120863.478:572): pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.478000 audit[8858]: USER_END pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:44.039467 kernel: audit: type=1104 audit(1757120863.479:573): pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.479000 audit[8858]: CRED_DISP pid=8858 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Sep 6 01:07:43.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.90.135:22-139.178.68.195:36346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'