Sep 13 00:53:47.554392 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Sep 13 00:53:47.554406 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:53:47.554412 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:47.554419 kernel: BIOS-provided physical RAM map:
Sep 13 00:53:47.554423 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 13 00:53:47.554427 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 13 00:53:47.554431 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 13 00:53:47.554436 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 13 00:53:47.554440 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 13 00:53:47.554444 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbdfff] usable
Sep 13 00:53:47.554448 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x000000006dfbefff] ACPI NVS
Sep 13 00:53:47.554452 kernel: BIOS-e820: [mem 0x000000006dfbf000-0x000000006dfbffff] reserved
Sep 13 00:53:47.554456 kernel: BIOS-e820: [mem 0x000000006dfc0000-0x0000000077fc6fff] usable
Sep 13 00:53:47.554460 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved
Sep 13 00:53:47.554466 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
Sep 13 00:53:47.554470 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS
Sep 13 00:53:47.554475 kernel: BIOS-e820: [mem 0x0000000079665000-0x000000007befefff] reserved
Sep 13 00:53:47.554479 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Sep 13 00:53:47.554483 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Sep 13 00:53:47.554487 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:53:47.554492 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 13 00:53:47.554496 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 13 00:53:47.554500 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 00:53:47.554505 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 13 00:53:47.554509 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Sep 13 00:53:47.554514 kernel: NX (Execute Disable) protection: active
Sep 13 00:53:47.554518 kernel: SMBIOS 3.2.1 present.
Sep 13 00:53:47.554522 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Sep 13 00:53:47.554527 kernel: tsc: Detected 3400.000 MHz processor
Sep 13 00:53:47.554531 kernel: tsc: Detected 3399.906 MHz TSC
Sep 13 00:53:47.554535 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:53:47.554540 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:53:47.554545 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Sep 13 00:53:47.554550 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:53:47.554555 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Sep 13 00:53:47.554559 kernel: Using GB pages for direct mapping
Sep 13 00:53:47.554564 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:53:47.554568 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 13 00:53:47.554572 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 13 00:53:47.554577 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013)
Sep 13 00:53:47.554583 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 13 00:53:47.554589 kernel: ACPI: FACS 0x0000000079664F80 000040
Sep 13 00:53:47.554594 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013)
Sep 13 00:53:47.554599 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013)
Sep 13 00:53:47.554603 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 13 00:53:47.554608 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 13 00:53:47.554613 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 13 00:53:47.554618 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 13 00:53:47.554623 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 13 00:53:47.554628 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 13 00:53:47.554633 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:53:47.554638 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 13 00:53:47.554642 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 13 00:53:47.554647 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:53:47.554652 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:53:47.554657 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 13 00:53:47.554662 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 13 00:53:47.554667 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:53:47.554672 kernel: ACPI: DBG2 0x000000007958F0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:53:47.554676 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 13 00:53:47.554681 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:53:47.554686 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 13 00:53:47.554691 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 13 00:53:47.554696 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 13 00:53:47.554702 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf5m 01072009 AMI 00010013)
Sep 13 00:53:47.554707 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 13 00:53:47.554711 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 13 00:53:47.554716 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 13 00:53:47.554721 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 13 00:53:47.554726 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 13 00:53:47.554731 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733]
Sep 13 00:53:47.554735 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e]
Sep 13 00:53:47.554740 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf]
Sep 13 00:53:47.554746 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863]
Sep 13 00:53:47.554751 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab]
Sep 13 00:53:47.554756 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b]
Sep 13 00:53:47.554760 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b]
Sep 13 00:53:47.554765 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0]
Sep 13 00:53:47.554770 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3]
Sep 13 00:53:47.554775 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd]
Sep 13 00:53:47.554779 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea]
Sep 13 00:53:47.554784 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27]
Sep 13 00:53:47.554790 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5]
Sep 13 00:53:47.554795 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce]
Sep 13 00:53:47.554799 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311]
Sep 13 00:53:47.554804 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab]
Sep 13 00:53:47.554809 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d]
Sep 13 00:53:47.554814 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071]
Sep 13 00:53:47.554818 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab]
Sep 13 00:53:47.554823 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958f0b0-0x7958f103]
Sep 13 00:53:47.554828 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e]
Sep 13 00:53:47.554834 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17]
Sep 13 00:53:47.554838 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b]
Sep 13 00:53:47.554843 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93]
Sep 13 00:53:47.554848 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26]
Sep 13 00:53:47.554853 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f]
Sep 13 00:53:47.554857 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f]
Sep 13 00:53:47.554862 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf]
Sep 13 00:53:47.554867 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf]
Sep 13 00:53:47.554872 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b]
Sep 13 00:53:47.554878 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1]
Sep 13 00:53:47.554882 kernel: No NUMA configuration found
Sep 13 00:53:47.554887 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Sep 13 00:53:47.554892 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Sep 13 00:53:47.554897 kernel: Zone ranges:
Sep 13 00:53:47.554902 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:53:47.554907 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:53:47.554911 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 00:53:47.554916 kernel: Movable zone start for each node
Sep 13 00:53:47.554922 kernel: Early memory node ranges
Sep 13 00:53:47.554927 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 13 00:53:47.554931 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 13 00:53:47.554936 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbdfff]
Sep 13 00:53:47.554941 kernel: node 0: [mem 0x000000006dfc0000-0x0000000077fc6fff]
Sep 13 00:53:47.554946 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff]
Sep 13 00:53:47.554950 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Sep 13 00:53:47.554955 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 00:53:47.554960 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Sep 13 00:53:47.554969 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:53:47.554974 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 13 00:53:47.554979 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 13 00:53:47.554986 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 13 00:53:47.554991 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Sep 13 00:53:47.554996 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Sep 13 00:53:47.555001 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Sep 13 00:53:47.555006 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Sep 13 00:53:47.555012 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 13 00:53:47.555017 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 00:53:47.555023 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 00:53:47.555028 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 00:53:47.555033 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 00:53:47.555038 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 00:53:47.555043 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 00:53:47.555048 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 00:53:47.555053 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 00:53:47.555059 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 00:53:47.555064 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 00:53:47.555070 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 00:53:47.555075 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 00:53:47.555080 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 00:53:47.555085 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 00:53:47.555090 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 00:53:47.555095 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 00:53:47.555100 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 13 00:53:47.555105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:53:47.555111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:53:47.555117 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:53:47.555122 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:53:47.555127 kernel: TSC deadline timer available
Sep 13 00:53:47.555132 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 13 00:53:47.555137 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Sep 13 00:53:47.555142 kernel: Booting paravirtualized kernel on bare hardware
Sep 13 00:53:47.555147 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:53:47.555154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Sep 13 00:53:47.555159 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 13 00:53:47.555164 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 13 00:53:47.555169 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 00:53:47.555174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222329
Sep 13 00:53:47.555179 kernel: Policy zone: Normal
Sep 13 00:53:47.555185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:53:47.555191 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:53:47.555196 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 13 00:53:47.555202 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 13 00:53:47.555207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:53:47.555212 kernel: Memory: 32681620K/33411996K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 730116K reserved, 0K cma-reserved)
Sep 13 00:53:47.555218 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 00:53:47.555223 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:53:47.555228 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:53:47.555233 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:53:47.555238 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:53:47.555244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 00:53:47.555250 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:53:47.555255 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:53:47.555260 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:53:47.555265 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 00:53:47.555270 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 13 00:53:47.555276 kernel: random: crng init done
Sep 13 00:53:47.555281 kernel: Console: colour dummy device 80x25
Sep 13 00:53:47.555286 kernel: printk: console [tty0] enabled
Sep 13 00:53:47.555292 kernel: printk: console [ttyS1] enabled
Sep 13 00:53:47.555297 kernel: ACPI: Core revision 20210730
Sep 13 00:53:47.555302 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 13 00:53:47.555308 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:53:47.555313 kernel: DMAR: Host address width 39
Sep 13 00:53:47.555318 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Sep 13 00:53:47.555323 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Sep 13 00:53:47.555328 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 13 00:53:47.555333 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 13 00:53:47.555339 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Sep 13 00:53:47.555345 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Sep 13 00:53:47.555350 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Sep 13 00:53:47.555355 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 13 00:53:47.555360 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 13 00:53:47.555365 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 13 00:53:47.555370 kernel: x2apic enabled
Sep 13 00:53:47.555375 kernel: Switched APIC routing to cluster x2apic.
Sep 13 00:53:47.555381 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:53:47.555386 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 13 00:53:47.555392 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 13 00:53:47.555397 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 13 00:53:47.555402 kernel: process: using mwait in idle threads
Sep 13 00:53:47.555407 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:53:47.555412 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:53:47.555419 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:53:47.555425 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 00:53:47.555430 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:53:47.555436 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:53:47.555441 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 00:53:47.555446 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 00:53:47.555452 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 00:53:47.555457 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:53:47.555462 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:53:47.555467 kernel: TAA: Mitigation: TSX disabled
Sep 13 00:53:47.555472 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 13 00:53:47.555477 kernel: SRBDS: Mitigation: Microcode
Sep 13 00:53:47.555484 kernel: GDS: Vulnerable: No microcode
Sep 13 00:53:47.555489 kernel: active return thunk: its_return_thunk
Sep 13 00:53:47.555494 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:53:47.555499 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:53:47.555504 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:53:47.555509 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:53:47.555514 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:53:47.555520 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:53:47.555525 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:53:47.555531 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:53:47.555536 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:53:47.555541 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 13 00:53:47.555546 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:53:47.555551 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:53:47.555557 kernel: LSM: Security Framework initializing
Sep 13 00:53:47.555562 kernel: SELinux: Initializing.
Sep 13 00:53:47.555567 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:53:47.555572 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:53:47.555578 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 13 00:53:47.555583 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 00:53:47.555588 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 13 00:53:47.555594 kernel: ... version: 4
Sep 13 00:53:47.555599 kernel: ... bit width: 48
Sep 13 00:53:47.555604 kernel: ... generic registers: 4
Sep 13 00:53:47.555609 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:53:47.555614 kernel: ... max period: 00007fffffffffff
Sep 13 00:53:47.555619 kernel: ... fixed-purpose events: 3
Sep 13 00:53:47.555625 kernel: ... event mask: 000000070000000f
Sep 13 00:53:47.555631 kernel: signal: max sigframe size: 2032
Sep 13 00:53:47.555636 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:53:47.555641 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 13 00:53:47.555646 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:53:47.555651 kernel: x86: Booting SMP configuration:
Sep 13 00:53:47.555656 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Sep 13 00:53:47.555662 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:53:47.555667 kernel: #9 #10 #11 #12 #13 #14 #15
Sep 13 00:53:47.555673 kernel: smp: Brought up 1 node, 16 CPUs
Sep 13 00:53:47.555678 kernel: smpboot: Max logical packages: 1
Sep 13 00:53:47.555683 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 13 00:53:47.555689 kernel: devtmpfs: initialized
Sep 13 00:53:47.555694 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:53:47.555699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbe000-0x6dfbefff] (4096 bytes)
Sep 13 00:53:47.555704 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes)
Sep 13 00:53:47.555709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:53:47.555715 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 00:53:47.555720 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:53:47.555725 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:53:47.555731 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:53:47.555736 kernel: audit: type=2000 audit(1757724822.132:1): state=initialized audit_enabled=0 res=1
Sep 13 00:53:47.555741 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:53:47.555746 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:53:47.555751 kernel: cpuidle: using governor menu
Sep 13 00:53:47.555756 kernel: ACPI: bus type PCI registered
Sep 13 00:53:47.555762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:53:47.555767 kernel: dca service started, version 1.12.1
Sep 13 00:53:47.555772 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 13 00:53:47.555778 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Sep 13 00:53:47.555783 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:53:47.555788 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 13 00:53:47.555793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:53:47.555798 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:53:47.555803 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:53:47.555809 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:53:47.555814 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:53:47.555819 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:53:47.555825 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:53:47.555830 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:53:47.555835 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:53:47.555840 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 13 00:53:47.555845 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:53:47.555850 kernel: ACPI: SSDT 0xFFFFA0490021D600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 13 00:53:47.555857 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Sep 13 00:53:47.555862 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:53:47.555867 kernel: ACPI: SSDT 0xFFFFA04901C5A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 13 00:53:47.555872 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:53:47.555877 kernel: ACPI: SSDT 0xFFFFA04901D4D800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 13 00:53:47.555882 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:53:47.555887 kernel: ACPI: SSDT 0xFFFFA04900148000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 13 00:53:47.555893 kernel: ACPI: Interpreter enabled
Sep 13 00:53:47.555898 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:53:47.555903 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:53:47.555909 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 13 00:53:47.555914 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 13 00:53:47.555919 kernel: HEST: Table parsing has been initialized.
Sep 13 00:53:47.555924 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 13 00:53:47.555930 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:53:47.555935 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 13 00:53:47.555940 kernel: ACPI: PM: Power Resource [USBC]
Sep 13 00:53:47.555945 kernel: ACPI: PM: Power Resource [V0PR]
Sep 13 00:53:47.555950 kernel: ACPI: PM: Power Resource [V1PR]
Sep 13 00:53:47.555956 kernel: ACPI: PM: Power Resource [V2PR]
Sep 13 00:53:47.555961 kernel: ACPI: PM: Power Resource [WRST]
Sep 13 00:53:47.555966 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 13 00:53:47.555972 kernel: ACPI: PM: Power Resource [FN00]
Sep 13 00:53:47.555977 kernel: ACPI: PM: Power Resource [FN01]
Sep 13 00:53:47.555982 kernel: ACPI: PM: Power Resource [FN02]
Sep 13 00:53:47.555987 kernel: ACPI: PM: Power Resource [FN03]
Sep 13 00:53:47.555992 kernel: ACPI: PM: Power Resource [FN04]
Sep 13 00:53:47.555997 kernel: ACPI: PM: Power Resource [PIN]
Sep 13 00:53:47.556003 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 13 00:53:47.556069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:53:47.556116 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 13 00:53:47.556159 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 13 00:53:47.556167 kernel: PCI host bridge to bus 0000:00
Sep 13 00:53:47.556213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:53:47.556253 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:53:47.556294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:53:47.556332 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Sep 13 00:53:47.556371 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 13 00:53:47.556409 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 13 00:53:47.556464 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 13 00:53:47.556518 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 13 00:53:47.556566 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.556616 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 13 00:53:47.556660 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.556709 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Sep 13 00:53:47.556753 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Sep 13 00:53:47.556797 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Sep 13 00:53:47.556842 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Sep 13 00:53:47.556892 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 13 00:53:47.556937 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Sep 13 00:53:47.556985 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 13 00:53:47.557029 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Sep 13 00:53:47.557077 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 13 00:53:47.557121 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Sep 13 00:53:47.557167 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 13 00:53:47.557216 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 13 00:53:47.557260 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Sep 13 00:53:47.557304 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Sep 13 00:53:47.557352 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 13 00:53:47.557395 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:53:47.557447 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 13 00:53:47.557491 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:53:47.557538 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 13 00:53:47.557582 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Sep 13 00:53:47.557625 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 13 00:53:47.557672 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 13 00:53:47.557722 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Sep 13 00:53:47.557767 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 13 00:53:47.557815 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 13 00:53:47.557859 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Sep 13 00:53:47.557903 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 13 00:53:47.557950 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 13 00:53:47.557995 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Sep 13 00:53:47.558039 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Sep 13 00:53:47.558083 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Sep 13 00:53:47.558126 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Sep 13 00:53:47.558170 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Sep 13 00:53:47.558213 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Sep 13 00:53:47.558256 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 13 00:53:47.558307 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 13 00:53:47.558354 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.558405 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 13 00:53:47.558452 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.558501 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 13 00:53:47.558547 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.558595 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 13 00:53:47.558641 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.558689 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 13 00:53:47.558734 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.558782 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 13 00:53:47.558829 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:53:47.558879 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 13 00:53:47.558927 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 13 00:53:47.558971 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Sep 13 00:53:47.559014 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 13 00:53:47.559063 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 13 00:53:47.559108 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 13 00:53:47.559153 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 00:53:47.559202 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 13 00:53:47.559249 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 13 00:53:47.559295 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Sep 13 00:53:47.559340 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 13 00:53:47.559385 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 00:53:47.559437 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 00:53:47.559488 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 13 00:53:47.559535 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 13 00:53:47.559581 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Sep 13 00:53:47.559626 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 13 00:53:47.559692 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 00:53:47.559736 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 00:53:47.559784 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 13 00:53:47.559828 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Sep 13 00:53:47.559871 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 00:53:47.559915 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 13 00:53:47.559964 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 13 00:53:47.560009 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 13 00:53:47.560101 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Sep 13 00:53:47.560168 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 13 00:53:47.560213 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Sep 13 00:53:47.560257 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 13 00:53:47.560301 kernel: pci 0000:00:1b.4: PCI
bridge to [bus 04] Sep 13 00:53:47.560344 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 00:53:47.560388 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Sep 13 00:53:47.560481 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Sep 13 00:53:47.560526 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Sep 13 00:53:47.560574 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Sep 13 00:53:47.560619 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Sep 13 00:53:47.560664 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Sep 13 00:53:47.560710 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:53:47.560754 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 13 00:53:47.560797 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 00:53:47.560840 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 13 00:53:47.560886 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 13 00:53:47.560935 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Sep 13 00:53:47.560981 kernel: pci 0000:07:00.0: enabling Extended Tags Sep 13 00:53:47.561025 kernel: pci 0000:07:00.0: supports D1 D2 Sep 13 00:53:47.561070 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:53:47.561114 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 13 00:53:47.561157 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 13 00:53:47.561201 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 13 00:53:47.561250 kernel: pci_bus 0000:08: extended config space not accessible Sep 13 00:53:47.561304 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Sep 13 00:53:47.561352 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Sep 13 00:53:47.561400 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Sep 13 00:53:47.561491 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] 
Sep 13 00:53:47.561539 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:53:47.561586 kernel: pci 0000:08:00.0: supports D1 D2 Sep 13 00:53:47.561636 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:53:47.561681 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 13 00:53:47.561727 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 13 00:53:47.561771 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 13 00:53:47.561779 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 13 00:53:47.561785 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 13 00:53:47.561790 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 13 00:53:47.561796 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 13 00:53:47.561803 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 13 00:53:47.561809 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 13 00:53:47.561814 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 13 00:53:47.561819 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 13 00:53:47.561825 kernel: iommu: Default domain type: Translated Sep 13 00:53:47.561830 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:53:47.561879 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Sep 13 00:53:47.561926 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:53:47.561974 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Sep 13 00:53:47.561983 kernel: vgaarb: loaded Sep 13 00:53:47.561988 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:53:47.561994 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:53:47.561999 kernel: PTP clock support registered Sep 13 00:53:47.562005 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:53:47.562010 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:53:47.562016 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 13 00:53:47.562021 kernel: e820: reserve RAM buffer [mem 0x6dfbe000-0x6fffffff] Sep 13 00:53:47.562026 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff] Sep 13 00:53:47.562032 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff] Sep 13 00:53:47.562038 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Sep 13 00:53:47.562043 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Sep 13 00:53:47.562048 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 13 00:53:47.562054 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Sep 13 00:53:47.562059 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:53:47.562064 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:53:47.562070 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:53:47.562075 kernel: pnp: PnP ACPI init Sep 13 00:53:47.562121 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 13 00:53:47.562168 kernel: pnp 00:02: [dma 0 disabled] Sep 13 00:53:47.562211 kernel: pnp 00:03: [dma 0 disabled] Sep 13 00:53:47.562254 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 13 00:53:47.562293 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 13 00:53:47.562335 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 13 00:53:47.562381 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 13 00:53:47.562444 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 13 00:53:47.562499 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 13 00:53:47.562538 kernel: system 00:06: [mem 0xe0000000-0xefffffff] 
has been reserved Sep 13 00:53:47.562578 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 13 00:53:47.562616 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 13 00:53:47.562655 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 13 00:53:47.562696 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 13 00:53:47.562738 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 13 00:53:47.562778 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 13 00:53:47.562817 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 13 00:53:47.562855 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 13 00:53:47.562893 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 13 00:53:47.562932 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 13 00:53:47.562973 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 13 00:53:47.563016 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 13 00:53:47.563024 kernel: pnp: PnP ACPI: found 10 devices Sep 13 00:53:47.563030 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:53:47.563035 kernel: NET: Registered PF_INET protocol family Sep 13 00:53:47.563041 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:53:47.563046 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 00:53:47.563053 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:53:47.563058 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:53:47.563064 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 13 00:53:47.563069 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 13 
00:53:47.563075 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:53:47.563080 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:53:47.563086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:53:47.563091 kernel: NET: Registered PF_XDP protocol family Sep 13 00:53:47.563135 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Sep 13 00:53:47.563180 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Sep 13 00:53:47.563226 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Sep 13 00:53:47.563269 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:53:47.563316 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:53:47.563360 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:53:47.563408 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:53:47.563479 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:53:47.563524 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Sep 13 00:53:47.563569 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Sep 13 00:53:47.563614 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:53:47.563659 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Sep 13 00:53:47.563703 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Sep 13 00:53:47.563750 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 00:53:47.563796 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Sep 13 00:53:47.563842 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 13 00:53:47.563886 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 00:53:47.563930 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Sep 13 00:53:47.563975 kernel: pci 0000:00:1c.0: 
PCI bridge to [bus 06] Sep 13 00:53:47.564021 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 13 00:53:47.564067 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 13 00:53:47.564113 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Sep 13 00:53:47.564159 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 13 00:53:47.564205 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 13 00:53:47.564251 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Sep 13 00:53:47.564291 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 13 00:53:47.564330 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:53:47.564370 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:53:47.564409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:53:47.564451 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Sep 13 00:53:47.564489 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 13 00:53:47.564539 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Sep 13 00:53:47.564582 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:53:47.564627 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Sep 13 00:53:47.564688 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Sep 13 00:53:47.564731 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 13 00:53:47.564771 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Sep 13 00:53:47.564818 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 13 00:53:47.564858 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Sep 13 00:53:47.564901 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Sep 13 00:53:47.564943 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Sep 13 00:53:47.564950 kernel: PCI: CLS 64 bytes, default 64 Sep 13 
00:53:47.564955 kernel: DMAR: No ATSR found Sep 13 00:53:47.564961 kernel: DMAR: No SATC found Sep 13 00:53:47.564968 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Sep 13 00:53:47.564973 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Sep 13 00:53:47.564979 kernel: DMAR: IOMMU feature nwfs inconsistent Sep 13 00:53:47.564984 kernel: DMAR: IOMMU feature pasid inconsistent Sep 13 00:53:47.564990 kernel: DMAR: IOMMU feature eafs inconsistent Sep 13 00:53:47.564995 kernel: DMAR: IOMMU feature prs inconsistent Sep 13 00:53:47.565000 kernel: DMAR: IOMMU feature nest inconsistent Sep 13 00:53:47.565006 kernel: DMAR: IOMMU feature mts inconsistent Sep 13 00:53:47.565011 kernel: DMAR: IOMMU feature sc_support inconsistent Sep 13 00:53:47.565016 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Sep 13 00:53:47.565023 kernel: DMAR: dmar0: Using Queued invalidation Sep 13 00:53:47.565028 kernel: DMAR: dmar1: Using Queued invalidation Sep 13 00:53:47.565073 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 13 00:53:47.565118 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 13 00:53:47.565162 kernel: pci 0000:00:01.1: Adding to iommu group 1 Sep 13 00:53:47.565205 kernel: pci 0000:00:02.0: Adding to iommu group 2 Sep 13 00:53:47.565249 kernel: pci 0000:00:08.0: Adding to iommu group 3 Sep 13 00:53:47.565291 kernel: pci 0000:00:12.0: Adding to iommu group 4 Sep 13 00:53:47.565337 kernel: pci 0000:00:14.0: Adding to iommu group 5 Sep 13 00:53:47.565381 kernel: pci 0000:00:14.2: Adding to iommu group 5 Sep 13 00:53:47.565425 kernel: pci 0000:00:15.0: Adding to iommu group 6 Sep 13 00:53:47.565509 kernel: pci 0000:00:15.1: Adding to iommu group 6 Sep 13 00:53:47.565552 kernel: pci 0000:00:16.0: Adding to iommu group 7 Sep 13 00:53:47.565595 kernel: pci 0000:00:16.1: Adding to iommu group 7 Sep 13 00:53:47.565638 kernel: pci 0000:00:16.4: Adding to iommu group 7 Sep 13 00:53:47.565681 kernel: pci 0000:00:17.0: Adding to iommu group 8 Sep 13 
00:53:47.565726 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Sep 13 00:53:47.565768 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Sep 13 00:53:47.565812 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Sep 13 00:53:47.565855 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Sep 13 00:53:47.565899 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Sep 13 00:53:47.565942 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Sep 13 00:53:47.565986 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Sep 13 00:53:47.566029 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Sep 13 00:53:47.566075 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Sep 13 00:53:47.566119 kernel: pci 0000:02:00.0: Adding to iommu group 1 Sep 13 00:53:47.566164 kernel: pci 0000:02:00.1: Adding to iommu group 1 Sep 13 00:53:47.566209 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 13 00:53:47.566254 kernel: pci 0000:05:00.0: Adding to iommu group 17 Sep 13 00:53:47.566299 kernel: pci 0000:07:00.0: Adding to iommu group 18 Sep 13 00:53:47.566346 kernel: pci 0000:08:00.0: Adding to iommu group 18 Sep 13 00:53:47.566353 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 13 00:53:47.566359 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:53:47.566366 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB) Sep 13 00:53:47.566371 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Sep 13 00:53:47.566377 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 13 00:53:47.566382 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 13 00:53:47.566388 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 13 00:53:47.566393 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Sep 13 00:53:47.566486 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 13 00:53:47.566494 kernel: Initialise system trusted keyrings Sep 13 00:53:47.566501 
kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 13 00:53:47.566506 kernel: Key type asymmetric registered Sep 13 00:53:47.566511 kernel: Asymmetric key parser 'x509' registered Sep 13 00:53:47.566517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 00:53:47.566522 kernel: io scheduler mq-deadline registered Sep 13 00:53:47.566527 kernel: io scheduler kyber registered Sep 13 00:53:47.566533 kernel: io scheduler bfq registered Sep 13 00:53:47.566576 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Sep 13 00:53:47.566622 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Sep 13 00:53:47.566668 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Sep 13 00:53:47.566712 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Sep 13 00:53:47.566756 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Sep 13 00:53:47.566799 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Sep 13 00:53:47.566843 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Sep 13 00:53:47.566891 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 13 00:53:47.566899 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 13 00:53:47.566906 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 13 00:53:47.566912 kernel: pstore: Registered erst as persistent store backend Sep 13 00:53:47.566917 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:53:47.566923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:53:47.566928 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:53:47.566934 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:53:47.566979 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 13 00:53:47.566987 kernel: i8042: PNP: No PS/2 controller found. 
Sep 13 00:53:47.567027 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 13 00:53:47.567068 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 13 00:53:47.567108 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-13T00:53:46 UTC (1757724826) Sep 13 00:53:47.567149 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 13 00:53:47.567156 kernel: intel_pstate: Intel P-state driver initializing Sep 13 00:53:47.567162 kernel: intel_pstate: Disabling energy efficiency optimization Sep 13 00:53:47.567167 kernel: intel_pstate: HWP enabled Sep 13 00:53:47.567173 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 13 00:53:47.567180 kernel: vesafb: scrolling: redraw Sep 13 00:53:47.567185 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 13 00:53:47.567191 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000b6e85cad, using 768k, total 768k Sep 13 00:53:47.567196 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:53:47.567201 kernel: fb0: VESA VGA frame buffer device Sep 13 00:53:47.567207 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:53:47.567212 kernel: Segment Routing with IPv6 Sep 13 00:53:47.567218 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:53:47.567223 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:53:47.567229 kernel: Key type dns_resolver registered Sep 13 00:53:47.567235 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Sep 13 00:53:47.567240 kernel: microcode: Microcode Update Driver: v2.2. 
Sep 13 00:53:47.567245 kernel: IPI shorthand broadcast: enabled Sep 13 00:53:47.567251 kernel: sched_clock: Marking stable (1858771342, 1360219353)->(4643927286, -1424936591) Sep 13 00:53:47.567256 kernel: registered taskstats version 1 Sep 13 00:53:47.567262 kernel: Loading compiled-in X.509 certificates Sep 13 00:53:47.567267 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:53:47.567272 kernel: Key type .fscrypt registered Sep 13 00:53:47.567279 kernel: Key type fscrypt-provisioning registered Sep 13 00:53:47.567284 kernel: pstore: Using crash dump compression: deflate Sep 13 00:53:47.567289 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:53:47.567295 kernel: ima: No architecture policies found Sep 13 00:53:47.567300 kernel: clk: Disabling unused clocks Sep 13 00:53:47.567305 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:53:47.567311 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:53:47.567316 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:53:47.567322 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:53:47.567328 kernel: Run /init as init process Sep 13 00:53:47.567333 kernel: with arguments: Sep 13 00:53:47.567338 kernel: /init Sep 13 00:53:47.567344 kernel: with environment: Sep 13 00:53:47.567349 kernel: HOME=/ Sep 13 00:53:47.567354 kernel: TERM=linux Sep 13 00:53:47.567360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:53:47.567366 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:53:47.567374 systemd[1]: Detected architecture x86-64. 
Sep 13 00:53:47.567379 systemd[1]: Running in initrd. Sep 13 00:53:47.567385 systemd[1]: No hostname configured, using default hostname. Sep 13 00:53:47.567390 systemd[1]: Hostname set to . Sep 13 00:53:47.567396 systemd[1]: Initializing machine ID from random generator. Sep 13 00:53:47.567402 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:53:47.567407 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:53:47.567413 systemd[1]: Reached target cryptsetup.target. Sep 13 00:53:47.567439 systemd[1]: Reached target paths.target. Sep 13 00:53:47.567444 systemd[1]: Reached target slices.target. Sep 13 00:53:47.567470 systemd[1]: Reached target swap.target. Sep 13 00:53:47.567475 systemd[1]: Reached target timers.target. Sep 13 00:53:47.567480 systemd[1]: Listening on iscsid.socket. Sep 13 00:53:47.567487 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:53:47.567492 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:53:47.567498 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:53:47.567504 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:53:47.567510 kernel: tsc: Refined TSC clocksource calibration: 3408.091 MHz Sep 13 00:53:47.567516 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:53:47.567521 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x312029d2519, max_idle_ns: 440795330833 ns Sep 13 00:53:47.567527 kernel: clocksource: Switched to clocksource tsc Sep 13 00:53:47.567532 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:53:47.567538 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:53:47.567543 systemd[1]: Reached target sockets.target. Sep 13 00:53:47.567550 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:53:47.567555 systemd[1]: Finished network-cleanup.service. Sep 13 00:53:47.567561 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:53:47.567566 systemd[1]: Starting systemd-journald.service... 
Sep 13 00:53:47.567572 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:53:47.567579 systemd-journald[270]: Journal started Sep 13 00:53:47.567606 systemd-journald[270]: Runtime Journal (/run/log/journal/8cb67e5175ab47ee9eae9330400ffad8) is 8.0M, max 639.3M, 631.3M free. Sep 13 00:53:47.569803 systemd-modules-load[271]: Inserted module 'overlay' Sep 13 00:53:47.575000 audit: BPF prog-id=6 op=LOAD Sep 13 00:53:47.593425 kernel: audit: type=1334 audit(1757724827.575:2): prog-id=6 op=LOAD Sep 13 00:53:47.593447 systemd[1]: Starting systemd-resolved.service... Sep 13 00:53:47.642454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:53:47.642471 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:53:47.675422 kernel: Bridge firewalling registered Sep 13 00:53:47.675438 systemd[1]: Started systemd-journald.service. Sep 13 00:53:47.690240 systemd-modules-load[271]: Inserted module 'br_netfilter' Sep 13 00:53:47.739971 kernel: audit: type=1130 audit(1757724827.698:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.693023 systemd-resolved[273]: Positive Trust Anchors: Sep 13 00:53:47.804465 kernel: SCSI subsystem initialized Sep 13 00:53:47.804477 kernel: audit: type=1130 audit(1757724827.751:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:47.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.693028 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:53:47.909869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:53:47.909881 kernel: audit: type=1130 audit(1757724827.824:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.909888 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:53:47.909895 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:53:47.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.693048 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:53:47.991610 kernel: audit: type=1130 audit(1757724827.927:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:47.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.694644 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 13 00:53:48.044482 kernel: audit: type=1130 audit(1757724827.999:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.698724 systemd[1]: Started systemd-resolved.service. Sep 13 00:53:48.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.751608 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:53:48.114635 kernel: audit: type=1130 audit(1757724828.052:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:47.824582 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:53:47.924143 systemd-modules-load[271]: Inserted module 'dm_multipath' Sep 13 00:53:47.927702 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:47.999666 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:53:48.052681 systemd[1]: Reached target nss-lookup.target. Sep 13 00:53:48.108010 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:53:48.115059 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:48.128102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Sep 13 00:53:48.128827 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:48.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.131032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:53:48.177607 kernel: audit: type=1130 audit(1757724828.128:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.192878 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:53:48.259544 kernel: audit: type=1130 audit(1757724828.192:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.251007 systemd[1]: Starting dracut-cmdline.service... 
Sep 13 00:53:48.274540 dracut-cmdline[296]: dracut-dracut-053 Sep 13 00:53:48.274540 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 13 00:53:48.274540 dracut-cmdline[296]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:48.358508 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:53:48.358523 kernel: iscsi: registered transport (tcp) Sep 13 00:53:48.407549 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:53:48.407568 kernel: QLogic iSCSI HBA Driver Sep 13 00:53:48.423866 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:53:48.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:48.433131 systemd[1]: Starting dracut-pre-udev.service... 
Sep 13 00:53:48.488451 kernel: raid6: avx2x4 gen() 48892 MB/s Sep 13 00:53:48.523449 kernel: raid6: avx2x4 xor() 21279 MB/s Sep 13 00:53:48.558485 kernel: raid6: avx2x2 gen() 53671 MB/s Sep 13 00:53:48.593488 kernel: raid6: avx2x2 xor() 32069 MB/s Sep 13 00:53:48.628488 kernel: raid6: avx2x1 gen() 45206 MB/s Sep 13 00:53:48.663491 kernel: raid6: avx2x1 xor() 27907 MB/s Sep 13 00:53:48.697454 kernel: raid6: sse2x4 gen() 21348 MB/s Sep 13 00:53:48.731453 kernel: raid6: sse2x4 xor() 11986 MB/s Sep 13 00:53:48.765453 kernel: raid6: sse2x2 gen() 21628 MB/s Sep 13 00:53:48.799491 kernel: raid6: sse2x2 xor() 13452 MB/s Sep 13 00:53:48.833488 kernel: raid6: sse2x1 gen() 18294 MB/s Sep 13 00:53:48.885372 kernel: raid6: sse2x1 xor() 8937 MB/s Sep 13 00:53:48.885387 kernel: raid6: using algorithm avx2x2 gen() 53671 MB/s Sep 13 00:53:48.885395 kernel: raid6: .... xor() 32069 MB/s, rmw enabled Sep 13 00:53:48.903573 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:53:48.949474 kernel: xor: automatically using best checksumming function avx Sep 13 00:53:49.050425 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:53:49.055219 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:53:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:49.063000 audit: BPF prog-id=7 op=LOAD Sep 13 00:53:49.063000 audit: BPF prog-id=8 op=LOAD Sep 13 00:53:49.064255 systemd[1]: Starting systemd-udevd.service... Sep 13 00:53:49.071816 systemd-udevd[477]: Using default interface naming scheme 'v252'. Sep 13 00:53:49.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:49.078653 systemd[1]: Started systemd-udevd.service. 
Sep 13 00:53:49.120547 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Sep 13 00:53:49.095037 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:53:49.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:49.124475 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:53:49.139697 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:53:49.195291 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:53:49.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:49.221429 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:53:49.223425 kernel: libata version 3.00 loaded. Sep 13 00:53:49.247428 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:53:49.247470 kernel: AES CTR mode by8 optimization enabled Sep 13 00:53:49.247483 kernel: ACPI: bus type USB registered Sep 13 00:53:49.298884 kernel: usbcore: registered new interface driver usbfs Sep 13 00:53:49.298914 kernel: usbcore: registered new interface driver hub Sep 13 00:53:49.316371 kernel: usbcore: registered new device driver usb Sep 13 00:53:49.333523 kernel: ahci 0000:00:17.0: version 3.0 Sep 13 00:53:49.779670 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 13 00:53:49.779754 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 13 00:53:49.779848 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 13 00:53:49.779860 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Sep 13 00:53:49.779872 kernel: scsi host0: ahci Sep 13 00:53:49.779968 kernel: scsi host1: ahci Sep 13 00:53:49.780066 kernel: scsi host2: ahci Sep 13 00:53:49.780183 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 13 00:53:49.780314 kernel: scsi host3: ahci Sep 13 00:53:49.780376 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 00:53:49.780440 kernel: scsi host4: ahci Sep 13 00:53:49.780504 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:72 Sep 13 00:53:49.780558 kernel: scsi host5: ahci Sep 13 00:53:49.780612 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 13 00:53:49.780666 kernel: scsi host6: ahci Sep 13 00:53:49.780724 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 13 00:53:49.780776 kernel: scsi host7: ahci Sep 13 00:53:49.780830 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Sep 13 00:53:49.780838 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Sep 13 00:53:49.780845 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 13 00:53:49.780905 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Sep 13 00:53:49.780916 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 00:53:49.780972 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Sep 13 00:53:49.780980 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:73 Sep 13 00:53:49.781040 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Sep 13 00:53:49.781048 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 13 00:53:49.781101 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Sep 13 00:53:49.781109 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 13 00:53:49.781159 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Sep 13 00:53:49.781169 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Sep 13 00:53:49.815506 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Sep 13 00:53:50.399384 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 00:53:50.399570 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 00:53:50.399735 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399744 kernel: port_module: 8 callbacks suppressed Sep 13 00:53:50.399752 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Sep 13 00:53:50.399812 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399821 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 00:53:50.399883 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399891 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399899 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399907 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 00:53:50.399914 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 00:53:50.399922 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:53:50.399929 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 00:53:50.399937 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 00:53:50.399944 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 00:53:50.399953 kernel: ata1.00: Features: NCQ-prio Sep 13 00:53:50.399961 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 00:53:50.399968 kernel: ata2.00: Features: NCQ-prio Sep 13 00:53:50.399976 kernel: 
mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 00:53:50.400035 kernel: ata1.00: configured for UDMA/133 Sep 13 00:53:50.400043 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Sep 13 00:53:51.173339 kernel: ata2.00: configured for UDMA/133 Sep 13 00:53:51.173351 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 00:53:51.173434 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 00:53:51.173494 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 00:53:51.173555 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 00:53:51.173608 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 13 00:53:51.173661 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 13 00:53:51.173712 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 13 00:53:51.173760 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 00:53:51.173810 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 13 00:53:51.173859 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 13 00:53:51.173907 kernel: hub 1-0:1.0: USB hub found Sep 13 00:53:51.173970 kernel: hub 1-0:1.0: 16 ports detected Sep 13 00:53:51.174025 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 00:53:51.174032 kernel: hub 2-0:1.0: USB hub found Sep 13 00:53:51.174089 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.174097 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 00:53:51.174155 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 00:53:51.174212 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 13 00:53:51.174266 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 13 00:53:51.174320 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 13 
00:53:51.174375 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:53:51.174435 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.174443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:53:51.174452 kernel: GPT:9289727 != 937703087 Sep 13 00:53:51.174458 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:53:51.174465 kernel: GPT:9289727 != 937703087 Sep 13 00:53:51.174471 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:53:51.174477 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:53:51.174484 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.174490 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 13 00:53:51.174546 kernel: hub 2-0:1.0: 10 ports detected Sep 13 00:53:51.174601 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:53:51.174658 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 13 00:53:51.174715 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 00:53:51.174773 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:53:51.174829 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 13 00:53:51.174880 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 13 00:53:51.174935 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 13 00:53:51.175034 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:53:51.175098 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 00:53:51.175152 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 00:53:51.175159 kernel: hub 1-14:1.0: USB hub found Sep 13 00:53:51.175219 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 00:53:51.175226 kernel: hub 1-14:1.0: 4 ports detected Sep 13 00:53:51.175282 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:53:51.175339 
kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 00:53:51.175392 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (658) Sep 13 00:53:51.146656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:53:51.229536 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Sep 13 00:53:51.229616 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Sep 13 00:53:51.200510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:53:51.213329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:53:51.245920 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:53:51.273151 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:53:51.278516 systemd[1]: Starting disk-uuid.service... Sep 13 00:53:51.327518 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.327530 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:53:51.327538 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.327591 disk-uuid[690]: Primary Header is updated. Sep 13 00:53:51.327591 disk-uuid[690]: Secondary Entries is updated. Sep 13 00:53:51.327591 disk-uuid[690]: Secondary Header is updated. Sep 13 00:53:51.403519 kernel: GPT:disk_guids don't match. Sep 13 00:53:51.403530 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 13 00:53:51.403537 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:53:51.403544 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:51.403550 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:53:51.403556 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 13 00:53:51.565432 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:53:51.598645 kernel: usbcore: registered new interface driver usbhid Sep 13 00:53:51.598680 kernel: usbhid: USB HID core driver Sep 13 00:53:51.632474 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 13 00:53:51.759007 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 13 00:53:51.759102 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 13 00:53:51.759111 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 13 00:53:52.365598 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:53:52.384477 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:53:52.384839 disk-uuid[691]: The operation has completed successfully. Sep 13 00:53:52.427932 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:53:52.524763 kernel: audit: type=1130 audit(1757724832.435:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.524777 kernel: audit: type=1131 audit(1757724832.435:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:52.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.427976 systemd[1]: Finished disk-uuid.service. Sep 13 00:53:52.554522 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:53:52.436083 systemd[1]: Starting verity-setup.service... Sep 13 00:53:52.586755 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:53:52.596644 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:53:52.610063 systemd[1]: Finished verity-setup.service. Sep 13 00:53:52.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.673424 kernel: audit: type=1130 audit(1757724832.624:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.748924 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:53:52.761631 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:53:52.761777 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:53:52.749031 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Sep 13 00:53:52.855538 kernel: BTRFS info (device sdb6): using free space tree Sep 13 00:53:52.855552 kernel: BTRFS info (device sdb6): has skinny extents Sep 13 00:53:52.855559 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 00:53:52.749431 systemd[1]: Starting ignition-setup.service... Sep 13 00:53:52.843459 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:53:52.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.863919 systemd[1]: Finished ignition-setup.service. Sep 13 00:53:52.936550 kernel: audit: type=1130 audit(1757724832.870:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.871017 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:53:52.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:52.928728 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:53:53.017735 kernel: audit: type=1130 audit(1757724832.944:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.017751 kernel: audit: type=1334 audit(1757724832.994:24): prog-id=9 op=LOAD Sep 13 00:53:52.994000 audit: BPF prog-id=9 op=LOAD Sep 13 00:53:52.995579 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:53:53.019924 ignition[845]: Ignition 2.14.0 Sep 13 00:53:53.019929 ignition[845]: Stage: fetch-offline Sep 13 00:53:53.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.032644 unknown[845]: fetched base config from "system" Sep 13 00:53:53.161625 kernel: audit: type=1130 audit(1757724833.046:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.161643 kernel: audit: type=1130 audit(1757724833.108:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:53.019955 ignition[845]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:53.032648 unknown[845]: fetched user config from "system" Sep 13 00:53:53.019968 ignition[845]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 00:53:53.032902 systemd-networkd[876]: lo: Link UP Sep 13 00:53:53.022579 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 00:53:53.237911 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 00:53:53.238021 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Sep 13 00:53:53.032904 systemd-networkd[876]: lo: Gained carrier Sep 13 00:53:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.022649 ignition[845]: parsed url from cmdline: "" Sep 13 00:53:53.033249 systemd-networkd[876]: Enumeration completed Sep 13 00:53:53.022652 ignition[845]: no config URL provided Sep 13 00:53:53.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.033320 systemd[1]: Started systemd-networkd.service. Sep 13 00:53:53.301557 iscsid[896]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:53.301557 iscsid[896]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 00:53:53.301557 iscsid[896]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:53:53.301557 iscsid[896]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:53:53.301557 iscsid[896]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:53:53.301557 iscsid[896]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:53:53.301557 iscsid[896]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:53:53.437609 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 13 00:53:53.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.022655 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:53:53.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:53.034111 systemd-networkd[876]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:53.022678 ignition[845]: parsing config with SHA512: 62801807163ab627013cdd4d13909f40491cb06c2d9bb80a960c5640c44f72b143d0f6175534c96d349450505b7415b5fbffc2a3929bd13be745e7d16079b4bc Sep 13 00:53:53.047222 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:53:53.032947 ignition[845]: fetch-offline: fetch-offline passed Sep 13 00:53:53.108733 systemd[1]: Reached target network.target. Sep 13 00:53:53.032950 ignition[845]: POST message to Packet Timeline Sep 13 00:53:53.169605 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Sep 13 00:53:53.032954 ignition[845]: POST Status error: resource requires networking Sep 13 00:53:53.170091 systemd[1]: Starting ignition-kargs.service... Sep 13 00:53:53.032991 ignition[845]: Ignition finished successfully Sep 13 00:53:53.187995 systemd[1]: Starting iscsiuio.service... Sep 13 00:53:53.174748 ignition[882]: Ignition 2.14.0 Sep 13 00:53:53.215495 systemd-networkd[876]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:53.174751 ignition[882]: Stage: kargs Sep 13 00:53:53.228616 systemd[1]: Started iscsiuio.service. Sep 13 00:53:53.174807 ignition[882]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:53:53.253083 systemd[1]: Starting iscsid.service... Sep 13 00:53:53.174817 ignition[882]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 00:53:53.266624 systemd[1]: Started iscsid.service. Sep 13 00:53:53.176117 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 00:53:53.281041 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:53:53.177680 ignition[882]: kargs: kargs passed Sep 13 00:53:53.294723 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:53:53.177683 ignition[882]: POST message to Packet Timeline Sep 13 00:53:53.309725 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:53:53.177693 ignition[882]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 00:53:53.328712 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:53:53.179901 ignition[882]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60435->[::1]:53: read: connection refused Sep 13 00:53:53.367394 systemd[1]: Reached target remote-fs.target. 
Sep 13 00:53:53.380203 ignition[882]: GET https://metadata.packet.net/metadata: attempt #2 Sep 13 00:53:53.408079 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:53:53.380500 ignition[882]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35236->[::1]:53: read: connection refused Sep 13 00:53:53.414398 systemd-networkd[876]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:53.420687 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:53:53.443061 systemd-networkd[876]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:53:53.472116 systemd-networkd[876]: enp2s0f1np1: Link UP Sep 13 00:53:53.472307 systemd-networkd[876]: enp2s0f1np1: Gained carrier Sep 13 00:53:53.488972 systemd-networkd[876]: enp2s0f0np0: Link UP Sep 13 00:53:53.781026 ignition[882]: GET https://metadata.packet.net/metadata: attempt #3 Sep 13 00:53:53.489388 systemd-networkd[876]: eno2: Link UP Sep 13 00:53:53.782059 ignition[882]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58237->[::1]:53: read: connection refused Sep 13 00:53:53.489813 systemd-networkd[876]: eno1: Link UP Sep 13 00:53:54.258079 systemd-networkd[876]: enp2s0f0np0: Gained carrier Sep 13 00:53:54.266663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Sep 13 00:53:54.308640 systemd-networkd[876]: enp2s0f0np0: DHCPv4 address 147.75.203.133/31, gateway 147.75.203.132 acquired from 145.40.83.140 Sep 13 00:53:54.582595 ignition[882]: GET https://metadata.packet.net/metadata: attempt #4 Sep 13 00:53:54.583651 ignition[882]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43769->[::1]:53: read: connection refused Sep 13 00:53:55.003656 systemd-networkd[876]: enp2s0f1np1: Gained IPv6LL Sep 13 00:53:56.155643 
systemd-networkd[876]: enp2s0f0np0: Gained IPv6LL Sep 13 00:53:56.184465 ignition[882]: GET https://metadata.packet.net/metadata: attempt #5 Sep 13 00:53:56.185663 ignition[882]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37830->[::1]:53: read: connection refused Sep 13 00:53:59.388034 ignition[882]: GET https://metadata.packet.net/metadata: attempt #6 Sep 13 00:54:00.494396 ignition[882]: GET result: OK Sep 13 00:54:00.935795 ignition[882]: Ignition finished successfully Sep 13 00:54:00.939657 systemd[1]: Finished ignition-kargs.service. Sep 13 00:54:01.025422 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:54:01.025442 kernel: audit: type=1130 audit(1757724840.950:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.959334 ignition[913]: Ignition 2.14.0 Sep 13 00:54:00.952547 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:54:00.959338 ignition[913]: Stage: disks Sep 13 00:54:00.959392 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:54:00.959401 ignition[913]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 00:54:00.960936 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 00:54:00.962340 ignition[913]: disks: disks passed Sep 13 00:54:00.962344 ignition[913]: POST message to Packet Timeline Sep 13 00:54:00.962353 ignition[913]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 00:54:01.985963 ignition[913]: GET result: OK Sep 13 00:54:02.421397 ignition[913]: Ignition finished successfully Sep 13 00:54:02.424136 systemd[1]: Finished ignition-disks.service. Sep 13 00:54:02.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.436937 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:54:02.514615 kernel: audit: type=1130 audit(1757724842.436:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.500615 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:54:02.500651 systemd[1]: Reached target local-fs.target. Sep 13 00:54:02.514662 systemd[1]: Reached target sysinit.target. Sep 13 00:54:02.540551 systemd[1]: Reached target basic.target. Sep 13 00:54:02.541113 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:54:02.585125 systemd-fsck[930]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:54:02.596875 systemd[1]: Finished systemd-fsck-root.service. 
Sep 13 00:54:02.695942 kernel: audit: type=1130 audit(1757724842.605:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:02.696032 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 00:54:02.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:02.611461 systemd[1]: Mounting sysroot.mount...
Sep 13 00:54:02.703059 systemd[1]: Mounted sysroot.mount.
Sep 13 00:54:02.716667 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 00:54:02.724278 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 00:54:02.749254 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 13 00:54:02.758017 systemd[1]: Starting flatcar-static-network.service...
Sep 13 00:54:02.765577 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:54:02.765604 systemd[1]: Reached target ignition-diskful.target.
Sep 13 00:54:02.790263 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 00:54:02.814113 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:54:02.826154 systemd[1]: Starting initrd-setup-root.service...
Sep 13 00:54:02.965631 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (946)
Sep 13 00:54:02.965648 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:54:02.965659 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:54:02.965666 kernel: BTRFS info (device sdb6): has skinny extents
Sep 13 00:54:02.965674 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:54:02.965684 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:54:03.028531 kernel: audit: type=1130 audit(1757724842.973:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:02.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:03.028643 coreos-metadata[938]: Sep 13 00:54:02.903 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:54:02.896016 systemd[1]: Finished initrd-setup-root.service.
Sep 13 00:54:03.049762 coreos-metadata[937]: Sep 13 00:54:02.903 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:54:03.074545 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:54:02.974701 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:54:03.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:03.129571 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:54:03.165611 kernel: audit: type=1130 audit(1757724843.100:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:03.037022 systemd[1]: Starting ignition-mount.service...
Sep 13 00:54:03.172605 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:54:03.062991 systemd[1]: Starting sysroot-boot.service...
Sep 13 00:54:03.189603 ignition[1019]: INFO : Ignition 2.14.0
Sep 13 00:54:03.189603 ignition[1019]: INFO : Stage: mount
Sep 13 00:54:03.189603 ignition[1019]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:54:03.189603 ignition[1019]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 00:54:03.189603 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:54:03.189603 ignition[1019]: INFO : mount: mount passed
Sep 13 00:54:03.189603 ignition[1019]: INFO : POST message to Packet Timeline
Sep 13 00:54:03.189603 ignition[1019]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:54:03.083154 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:54:03.083195 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 00:54:03.083790 systemd[1]: Finished sysroot-boot.service.
Sep 13 00:54:03.996798 coreos-metadata[937]: Sep 13 00:54:03.996 INFO Fetch successful
Sep 13 00:54:04.029328 coreos-metadata[937]: Sep 13 00:54:04.029 INFO wrote hostname ci-3510.3.8-n-d04f0c45dd to /sysroot/etc/hostname
Sep 13 00:54:04.029705 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 13 00:54:04.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.108582 kernel: audit: type=1130 audit(1757724844.051:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.125269 ignition[1019]: INFO : GET result: OK
Sep 13 00:54:04.266671 coreos-metadata[938]: Sep 13 00:54:04.266 INFO Fetch successful
Sep 13 00:54:04.302425 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Sep 13 00:54:04.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.302476 systemd[1]: Finished flatcar-static-network.service.
Sep 13 00:54:04.431606 kernel: audit: type=1130 audit(1757724844.310:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.431618 kernel: audit: type=1131 audit(1757724844.310:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.572319 ignition[1019]: INFO : Ignition finished successfully
Sep 13 00:54:04.574547 systemd[1]: Finished ignition-mount.service.
Sep 13 00:54:04.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.590351 systemd[1]: Starting ignition-files.service...
Sep 13 00:54:04.660520 kernel: audit: type=1130 audit(1757724844.588:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:04.655366 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 00:54:04.718005 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1033)
Sep 13 00:54:04.718023 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:54:04.718032 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:54:04.741340 kernel: BTRFS info (device sdb6): has skinny extents
Sep 13 00:54:04.789473 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:54:04.790941 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 00:54:04.807586 ignition[1052]: INFO : Ignition 2.14.0
Sep 13 00:54:04.807586 ignition[1052]: INFO : Stage: files
Sep 13 00:54:04.807586 ignition[1052]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:54:04.807586 ignition[1052]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 00:54:04.807586 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:54:04.810752 unknown[1052]: wrote ssh authorized keys file for user: core
Sep 13 00:54:04.870520 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:54:04.870520 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:54:04.870520 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Sep 13 00:54:05.013559 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 00:54:04.983539 systemd[1]: mnt-oem3605279660.mount: Deactivated successfully.
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3605279660"
Sep 13 00:54:05.265643 ignition[1052]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3605279660": device or resource busy
Sep 13 00:54:05.265643 ignition[1052]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3605279660", trying btrfs: device or resource busy
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3605279660"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3605279660"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3605279660"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3605279660"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:54:05.265643 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:54:05.473406 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Sep 13 00:54:05.785226 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:54:05.785226 ignition[1052]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:54:05.785226 ignition[1052]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Sep 13 00:54:05.785226 ignition[1052]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(12): [started] processing unit "containerd.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(12): [finished] processing unit "containerd.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(17): [started] setting preset to enabled for "packet-phone-home.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(17): [finished] setting preset to enabled for "packet-phone-home.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:54:05.840626 ignition[1052]: INFO : files: files passed
Sep 13 00:54:05.840626 ignition[1052]: INFO : POST message to Packet Timeline
Sep 13 00:54:05.840626 ignition[1052]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:54:06.890942 ignition[1052]: INFO : GET result: OK
Sep 13 00:54:07.672241 ignition[1052]: INFO : Ignition finished successfully
Sep 13 00:54:07.683131 systemd[1]: Finished ignition-files.service.
Sep 13 00:54:07.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.756462 kernel: audit: type=1130 audit(1757724847.699:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.705385 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 00:54:07.764661 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 00:54:07.798568 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:54:07.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.765118 systemd[1]: Starting ignition-quench.service...
Sep 13 00:54:07.987773 kernel: audit: type=1130 audit(1757724847.808:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.987789 kernel: audit: type=1130 audit(1757724847.874:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.987798 kernel: audit: type=1131 audit(1757724847.874:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.781817 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 00:54:07.808688 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:54:07.808747 systemd[1]: Finished ignition-quench.service.
Sep 13 00:54:08.141684 kernel: audit: type=1130 audit(1757724848.027:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.141696 kernel: audit: type=1131 audit(1757724848.027:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:07.874662 systemd[1]: Reached target ignition-complete.target.
Sep 13 00:54:07.995931 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 00:54:08.015670 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:54:08.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.015712 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 00:54:08.258633 kernel: audit: type=1130 audit(1757724848.186:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.027684 systemd[1]: Reached target initrd-fs.target.
Sep 13 00:54:08.149610 systemd[1]: Reached target initrd.target.
Sep 13 00:54:08.149701 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 00:54:08.150077 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 00:54:08.170785 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 00:54:08.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.186973 systemd[1]: Starting initrd-cleanup.service...
Sep 13 00:54:08.397614 kernel: audit: type=1131 audit(1757724848.321:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.254477 systemd[1]: Stopped target nss-lookup.target.
Sep 13 00:54:08.266687 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 00:54:08.281668 systemd[1]: Stopped target timers.target.
Sep 13 00:54:08.305620 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:54:08.305732 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 00:54:08.321794 systemd[1]: Stopped target initrd.target.
Sep 13 00:54:08.390653 systemd[1]: Stopped target basic.target.
Sep 13 00:54:08.397758 systemd[1]: Stopped target ignition-complete.target.
Sep 13 00:54:08.424630 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 00:54:08.446631 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 00:54:08.461688 systemd[1]: Stopped target remote-fs.target.
Sep 13 00:54:08.476791 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 00:54:08.491920 systemd[1]: Stopped target sysinit.target.
Sep 13 00:54:08.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.507925 systemd[1]: Stopped target local-fs.target.
Sep 13 00:54:08.659616 kernel: audit: type=1131 audit(1757724848.573:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.523905 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 00:54:08.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.540905 systemd[1]: Stopped target swap.target.
Sep 13 00:54:08.736646 kernel: audit: type=1131 audit(1757724848.659:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.555804 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:54:08.556110 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 00:54:08.574075 systemd[1]: Stopped target cryptsetup.target.
Sep 13 00:54:08.652618 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:54:08.652722 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 00:54:08.659773 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:54:08.659830 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 00:54:08.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.729803 systemd[1]: Stopped target paths.target.
Sep 13 00:54:08.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.743656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:54:08.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.747628 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 00:54:08.884638 ignition[1102]: INFO : Ignition 2.14.0
Sep 13 00:54:08.884638 ignition[1102]: INFO : Stage: umount
Sep 13 00:54:08.884638 ignition[1102]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 00:54:08.884638 ignition[1102]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Sep 13 00:54:08.884638 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:54:08.884638 ignition[1102]: INFO : umount: umount passed
Sep 13 00:54:08.884638 ignition[1102]: INFO : POST message to Packet Timeline
Sep 13 00:54:08.884638 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:54:08.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.765639 systemd[1]: Stopped target slices.target.
Sep 13 00:54:09.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.773717 systemd[1]: Stopped target sockets.target.
Sep 13 00:54:08.787716 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:54:09.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:09.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:08.787789 systemd[1]: Closed iscsid.socket.
Sep 13 00:54:08.806730 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:54:08.806881 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 00:54:08.826974 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:54:08.827268 systemd[1]: Stopped ignition-files.service.
Sep 13 00:54:08.843972 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:54:08.844279 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 13 00:54:08.861690 systemd[1]: Stopping ignition-mount.service...
Sep 13 00:54:08.876724 systemd[1]: Stopping iscsiuio.service...
Sep 13 00:54:08.892562 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:54:08.892656 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 00:54:08.912305 systemd[1]: Stopping sysroot-boot.service...
Sep 13 00:54:08.930599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:54:08.930819 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 00:54:08.957989 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:54:08.958305 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 00:54:08.975270 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:54:08.975587 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 00:54:08.975631 systemd[1]: Stopped iscsiuio.service.
Sep 13 00:54:08.997815 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:54:08.997872 systemd[1]: Stopped sysroot-boot.service.
Sep 13 00:54:09.014959 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:54:09.015049 systemd[1]: Closed iscsiuio.socket.
Sep 13 00:54:09.028975 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:54:09.029101 systemd[1]: Finished initrd-cleanup.service.
Sep 13 00:54:09.952325 ignition[1102]: INFO : GET result: OK
Sep 13 00:54:10.432215 ignition[1102]: INFO : Ignition finished successfully
Sep 13 00:54:10.434644 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:54:10.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:10.434858 systemd[1]: Stopped ignition-mount.service.
Sep 13 00:54:10.449888 systemd[1]: Stopped target network.target.
Sep 13 00:54:10.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:10.465617 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:54:10.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:10.465749 systemd[1]: Stopped ignition-disks.service.
Sep 13 00:54:10.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:54:10.480729 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:54:10.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.480851 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:54:10.495715 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:54:10.495837 systemd[1]: Stopped ignition-setup.service. Sep 13 00:54:10.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.512715 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:54:10.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.590000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:54:10.512835 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:54:10.527951 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:54:10.533548 systemd-networkd[876]: enp2s0f0np0: DHCPv6 lease lost Sep 13 00:54:10.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.543839 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:54:10.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:10.545612 systemd-networkd[876]: enp2s0f1np1: DHCPv6 lease lost Sep 13 00:54:10.662000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:54:10.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.558171 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:54:10.558399 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:54:10.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.575013 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:54:10.575280 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:54:10.590862 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:54:10.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.590970 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:54:10.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.610068 systemd[1]: Stopping network-cleanup.service... Sep 13 00:54:10.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.622622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:54:10.622777 systemd[1]: Stopped parse-ip-for-networkd.service. 
Sep 13 00:54:10.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.638801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:54:10.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:10.638928 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:54:10.654955 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:54:10.655072 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:54:10.670936 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:54:10.689015 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:54:10.690338 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:54:10.690654 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:54:10.703911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:54:10.704047 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:54:10.716724 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:54:10.716815 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:54:10.733603 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:54:10.733721 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:54:10.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:10.748767 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:54:10.748885 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:54:10.764848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:54:10.996000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:54:10.996000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:54:10.996000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:54:10.998000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:54:10.998000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:54:10.765002 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:54:10.782351 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:54:10.796575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:54:10.796707 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:54:11.064447 systemd-journald[270]: Received SIGTERM from PID 1 (n/a). Sep 13 00:54:11.064489 iscsid[896]: iscsid shutting down. Sep 13 00:54:10.815680 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:54:10.815741 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:54:10.939083 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:54:10.939315 systemd[1]: Stopped network-cleanup.service. Sep 13 00:54:10.952870 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:54:10.969205 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:54:10.988069 systemd[1]: Switching root. Sep 13 00:54:11.064907 systemd-journald[270]: Journal stopped Sep 13 00:54:14.849521 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:54:14.849538 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 00:54:14.849545 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:54:14.849551 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:54:14.849557 kernel: SELinux: policy capability open_perms=1 Sep 13 00:54:14.849562 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:54:14.849569 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:54:14.849575 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:54:14.849581 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:54:14.849586 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:54:14.849591 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:54:14.849598 systemd[1]: Successfully loaded SELinux policy in 298.750ms. Sep 13 00:54:14.849605 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.336ms. Sep 13 00:54:14.849612 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:54:14.849620 systemd[1]: Detected architecture x86-64. Sep 13 00:54:14.849626 systemd[1]: Detected first boot. Sep 13 00:54:14.849632 systemd[1]: Hostname set to . Sep 13 00:54:14.849639 systemd[1]: Initializing machine ID from random generator. Sep 13 00:54:14.849645 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:54:14.849652 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:54:14.849658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:54:14.849665 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:14.849672 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:14.849679 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:54:14.849685 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Sep 13 00:54:14.849692 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:54:14.849700 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:54:14.849707 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:54:14.849713 systemd[1]: Created slice system-getty.slice. Sep 13 00:54:14.849719 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:54:14.849726 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:54:14.849732 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:54:14.849738 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:54:14.849744 systemd[1]: Created slice user.slice. Sep 13 00:54:14.849752 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:54:14.849758 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:54:14.849764 systemd[1]: Set up automount boot.automount. Sep 13 00:54:14.849771 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:54:14.849777 systemd[1]: Reached target integritysetup.target. Sep 13 00:54:14.849784 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:54:14.849792 systemd[1]: Reached target remote-fs.target. Sep 13 00:54:14.849798 systemd[1]: Reached target slices.target. Sep 13 00:54:14.849805 systemd[1]: Reached target swap.target. Sep 13 00:54:14.849812 systemd[1]: Reached target torcx.target. 
Sep 13 00:54:14.849819 systemd[1]: Reached target veritysetup.target. Sep 13 00:54:14.849826 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:54:14.849833 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:54:14.849839 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:54:14.849846 kernel: kauditd_printk_skb: 48 callbacks suppressed Sep 13 00:54:14.849852 kernel: audit: type=1400 audit(1757724854.104:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:54:14.849860 kernel: audit: type=1335 audit(1757724854.104:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:54:14.849866 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:54:14.849873 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:54:14.849880 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:54:14.849886 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:54:14.849894 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:54:14.849901 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:54:14.849907 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:54:14.849914 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:54:14.849921 systemd[1]: Mounting media.mount... Sep 13 00:54:14.849928 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:14.849934 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:54:14.849941 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:54:14.849949 systemd[1]: Mounting tmp.mount... Sep 13 00:54:14.849955 systemd[1]: Starting flatcar-tmpfiles.service... 
Sep 13 00:54:14.849962 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:14.849969 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:54:14.849976 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:54:14.849983 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:14.849990 systemd[1]: Starting modprobe@drm.service... Sep 13 00:54:14.849996 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:14.850003 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:54:14.850010 kernel: fuse: init (API version 7.34) Sep 13 00:54:14.850017 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:14.850023 kernel: loop: module loaded Sep 13 00:54:14.850030 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:54:14.850037 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:54:14.850043 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:54:14.850050 systemd[1]: Starting systemd-journald.service... Sep 13 00:54:14.850057 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:54:14.850064 kernel: audit: type=1305 audit(1757724854.847:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:54:14.850072 systemd-journald[1299]: Journal started Sep 13 00:54:14.850098 systemd-journald[1299]: Runtime Journal (/run/log/journal/344f20050fc64660ba1fd962c27a171f) is 8.0M, max 639.3M, 631.3M free. 
Sep 13 00:54:14.104000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:54:14.104000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 13 00:54:14.847000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:54:14.847000 audit[1299]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffce9a4a580 a2=4000 a3=7ffce9a4a61c items=0 ppid=1 pid=1299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:14.847000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:54:14.897489 kernel: audit: type=1300 audit(1757724854.847:93): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffce9a4a580 a2=4000 a3=7ffce9a4a61c items=0 ppid=1 pid=1299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:14.897525 kernel: audit: type=1327 audit(1757724854.847:93): proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:54:15.011594 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:54:15.038581 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:54:15.064468 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:54:15.107468 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:15.126452 systemd[1]: Started systemd-journald.service. 
Sep 13 00:54:15.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.135172 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:54:15.183598 kernel: audit: type=1130 audit(1757724855.134:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.189665 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:54:15.196664 systemd[1]: Mounted media.mount. Sep 13 00:54:15.203650 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:54:15.211648 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:54:15.219617 systemd[1]: Mounted tmp.mount. Sep 13 00:54:15.226733 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:54:15.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.234750 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:54:15.282583 kernel: audit: type=1130 audit(1757724855.234:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.290739 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:54:15.290816 systemd[1]: Finished modprobe@configfs.service. 
Sep 13 00:54:15.339455 kernel: audit: type=1130 audit(1757724855.290:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.348094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:15.348389 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:15.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.398471 kernel: audit: type=1130 audit(1757724855.347:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.398493 kernel: audit: type=1131 audit(1757724855.347:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.457732 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 13 00:54:15.457809 systemd[1]: Finished modprobe@drm.service. Sep 13 00:54:15.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.466729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:15.466803 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:15.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.475728 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:54:15.475803 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:54:15.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.484725 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:15.484807 systemd[1]: Finished modprobe@loop.service. 
Sep 13 00:54:15.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.493800 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:54:15.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.502782 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:54:15.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.511764 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:54:15.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.519824 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:54:15.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.528994 systemd[1]: Reached target network-pre.target. Sep 13 00:54:15.539635 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:54:15.548155 systemd[1]: Mounting sys-kernel-config.mount... 
Sep 13 00:54:15.555591 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:54:15.556683 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:54:15.564101 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:54:15.568066 systemd-journald[1299]: Time spent on flushing to /var/log/journal/344f20050fc64660ba1fd962c27a171f is 14.546ms for 1551 entries. Sep 13 00:54:15.568066 systemd-journald[1299]: System Journal (/var/log/journal/344f20050fc64660ba1fd962c27a171f) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:54:15.608742 systemd-journald[1299]: Received client request to flush runtime journal. Sep 13 00:54:15.580513 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:15.581028 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:54:15.598546 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:15.599114 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:54:15.606117 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:54:15.613112 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:54:15.620744 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:54:15.628606 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:54:15.636683 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:54:15.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.644699 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:54:15.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:54:15.653705 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:54:15.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.661682 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:54:15.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.670726 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:54:15.679240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:54:15.687802 udevadm[1326]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:54:15.698877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:54:15.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.863781 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:54:15.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.874036 systemd[1]: Starting systemd-udevd.service... Sep 13 00:54:15.889017 systemd-udevd[1332]: Using default interface naming scheme 'v252'. Sep 13 00:54:15.908988 systemd[1]: Started systemd-udevd.service. 
Sep 13 00:54:15.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:15.920984 systemd[1]: Found device dev-ttyS1.device. Sep 13 00:54:15.954428 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:54:15.954482 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 13 00:54:15.972282 systemd[1]: Starting systemd-networkd.service... Sep 13 00:54:15.983811 kernel: ACPI: button: Sleep Button [SLPB] Sep 13 00:54:16.021903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:54:16.021981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:54:16.034298 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:54:16.049501 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:54:15.981000 audit[1392]: AVC avc: denied { confidentiality } for pid=1392 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:54:16.072468 kernel: IPMI message handler: version 39.2 Sep 13 00:54:15.981000 audit[1392]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56226da1c5e0 a1=4d9cc a2=7fecfb656bc5 a3=5 items=42 ppid=1332 pid=1392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:15.981000 audit: CWD cwd="/" Sep 13 00:54:15.981000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=1 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=2 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=3 name=(null) inode=20723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=4 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=5 name=(null) inode=20724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=6 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=7 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=8 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=9 name=(null) inode=20726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=10 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=11 name=(null) inode=20727 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=12 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=13 name=(null) inode=20728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=14 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=15 name=(null) inode=20729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=16 name=(null) inode=20725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=17 name=(null) inode=20730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=18 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:16.101422 kernel: ipmi device interface Sep 13 00:54:16.101451 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 13 00:54:16.167580 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI 
interrupt Sep 13 00:54:16.167783 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 13 00:54:15.981000 audit: PATH item=19 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=20 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=21 name=(null) inode=20732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=22 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=23 name=(null) inode=20733 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=24 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=25 name=(null) inode=20734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=26 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=27 name=(null) inode=20735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=28 name=(null) inode=20731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=29 name=(null) inode=20736 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=30 name=(null) inode=20722 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=31 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=32 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=33 name=(null) inode=20738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=34 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=35 name=(null) inode=20739 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=36 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=37 name=(null) inode=20740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=38 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=39 name=(null) inode=20741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=40 name=(null) inode=20737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PATH item=41 name=(null) inode=20742 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:15.981000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:54:16.169920 systemd[1]: Started systemd-userdbd.service. 
Sep 13 00:54:16.210158 kernel: ipmi_si: IPMI System Interface driver Sep 13 00:54:16.210189 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 13 00:54:16.210262 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 13 00:54:16.210274 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 13 00:54:16.210291 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 13 00:54:16.416901 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 13 00:54:16.417027 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 13 00:54:16.417142 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 13 00:54:16.417256 kernel: iTCO_vendor_support: vendor-support=0 Sep 13 00:54:16.417274 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 13 00:54:16.417365 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 13 00:54:16.417397 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 13 00:54:16.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.484427 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Sep 13 00:54:16.484540 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 13 00:54:16.562036 systemd-networkd[1410]: bond0: netdev ready Sep 13 00:54:16.565356 systemd-networkd[1410]: lo: Link UP Sep 13 00:54:16.565361 systemd-networkd[1410]: lo: Gained carrier Sep 13 00:54:16.565976 systemd-networkd[1410]: Enumeration completed Sep 13 00:54:16.566052 systemd[1]: Started systemd-networkd.service. 
Sep 13 00:54:16.566415 systemd-networkd[1410]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 13 00:54:16.571751 systemd-networkd[1410]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a7.network. Sep 13 00:54:16.582006 kernel: intel_rapl_common: Found RAPL domain package Sep 13 00:54:16.582031 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Sep 13 00:54:16.582132 kernel: intel_rapl_common: Found RAPL domain core Sep 13 00:54:16.602422 kernel: intel_rapl_common: Found RAPL domain uncore Sep 13 00:54:16.602443 kernel: intel_rapl_common: Found RAPL domain dram Sep 13 00:54:16.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.711475 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 13 00:54:16.732426 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 13 00:54:16.735706 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:54:16.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.744346 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:54:16.760890 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:54:16.799283 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:54:16.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.807870 systemd[1]: Reached target cryptsetup.target. 
Sep 13 00:54:16.819361 systemd[1]: Starting lvm2-activation.service... Sep 13 00:54:16.828837 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:54:16.882431 systemd[1]: Finished lvm2-activation.service. Sep 13 00:54:16.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:16.891824 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:54:16.901593 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:54:16.901657 systemd[1]: Reached target local-fs.target. Sep 13 00:54:16.910605 systemd[1]: Reached target machines.target. Sep 13 00:54:16.922862 systemd[1]: Starting ldconfig.service... Sep 13 00:54:16.931657 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:16.931789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:16.935200 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:54:16.945906 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:54:16.958476 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:54:16.970003 systemd[1]: Starting systemd-sysext.service... Sep 13 00:54:16.971103 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1443 (bootctl) Sep 13 00:54:16.974304 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:54:16.987240 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Sep 13 00:54:16.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.036896 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:54:17.046143 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:54:17.046773 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:54:17.101538 kernel: loop0: detected capacity change from 0 to 221472 Sep 13 00:54:17.129425 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 00:54:17.131934 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:54:17.132315 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:54:17.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.153449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:54:17.153484 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Sep 13 00:54:17.173885 systemd-fsck[1458]: fsck.fat 4.2 (2021-01-31) Sep 13 00:54:17.173885 systemd-fsck[1458]: /dev/sdb1: 790 files, 120761/258078 clusters Sep 13 00:54:17.174672 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:54:17.175922 systemd-networkd[1410]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network. Sep 13 00:54:17.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.186390 systemd[1]: Mounting boot.mount... 
Sep 13 00:54:17.197989 systemd[1]: Mounted boot.mount. Sep 13 00:54:17.218423 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 13 00:54:17.240424 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:54:17.243281 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:54:17.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.255957 (sd-sysext)[1465]: Using extensions 'kubernetes'. Sep 13 00:54:17.256139 (sd-sysext)[1465]: Merged extensions into '/usr'. Sep 13 00:54:17.265786 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:17.266668 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:54:17.273774 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:17.274522 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:17.283195 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:17.294482 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:17.301755 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:17.302133 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:17.302518 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:17.313096 systemd[1]: Mounted usr-share-oem.mount. 
Sep 13 00:54:17.335433 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 13 00:54:17.335718 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 13 00:54:17.357424 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Sep 13 00:54:17.357457 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Sep 13 00:54:17.390687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:17.390775 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:17.397064 systemd-networkd[1410]: bond0: Link UP Sep 13 00:54:17.397276 systemd-networkd[1410]: enp2s0f1np1: Link UP Sep 13 00:54:17.397426 systemd-networkd[1410]: enp2s0f1np1: Gained carrier Sep 13 00:54:17.398414 systemd-networkd[1410]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:8f:96:a6.network. Sep 13 00:54:17.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.403880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:17.403959 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:17.411227 ldconfig[1442]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:54:17.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:17.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.412702 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:17.412782 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:17.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.420794 systemd[1]: Finished ldconfig.service. Sep 13 00:54:17.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.434667 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:17.434729 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:17.435250 systemd[1]: Finished systemd-sysext.service. Sep 13 00:54:17.447509 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.462276 systemd[1]: Starting ensure-sysext.service... 
Sep 13 00:54:17.468477 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.482106 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:54:17.488480 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.492931 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:54:17.494376 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:54:17.495496 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:54:17.503926 systemd[1]: Reloading. Sep 13 00:54:17.508453 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.522676 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2025-09-13T00:54:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:17.522701 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2025-09-13T00:54:17Z" level=info msg="torcx already run" Sep 13 00:54:17.527427 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.546425 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.565424 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.583477 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.589240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:54:17.589247 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:17.600223 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:17.601463 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.618425 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.635422 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.645077 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:54:17.652472 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:17.669078 systemd[1]: Starting audit-rules.service... Sep 13 00:54:17.669422 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.684110 systemd[1]: Starting clean-ca-certificates.service... 
Sep 13 00:54:17.684000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:54:17.684000 audit[1583]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeed897e10 a2=420 a3=0 items=0 ppid=1568 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:17.684000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:54:17.685499 augenrules[1583]: No rules Sep 13 00:54:17.686425 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.702210 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:54:17.703483 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.719351 systemd[1]: Starting systemd-resolved.service... Sep 13 00:54:17.721484 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.735273 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:54:17.739512 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.739760 systemd-networkd[1410]: enp2s0f0np0: Link UP Sep 13 00:54:17.739990 systemd-networkd[1410]: bond0: Gained carrier Sep 13 00:54:17.740101 systemd-networkd[1410]: enp2s0f0np0: Gained carrier Sep 13 00:54:17.753067 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:54:17.756463 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Sep 13 00:54:17.756487 kernel: bond0: (slave enp2s0f1np1): link status definitely down, disabling slave Sep 13 00:54:17.756500 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 13 00:54:17.787834 systemd[1]: Finished audit-rules.service. 
Sep 13 00:54:17.804997 systemd-networkd[1410]: enp2s0f1np1: Link DOWN Sep 13 00:54:17.805001 systemd-networkd[1410]: enp2s0f1np1: Lost carrier Sep 13 00:54:17.805467 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 13 00:54:17.805496 kernel: bond0: active interface up! Sep 13 00:54:17.823667 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:54:17.831656 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:54:17.844722 systemd[1]: Starting systemd-update-done.service... Sep 13 00:54:17.851492 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:54:17.852027 systemd[1]: Finished systemd-update-done.service. Sep 13 00:54:17.861957 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:17.862724 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:17.870069 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:17.877131 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:17.878838 systemd-resolved[1593]: Positive Trust Anchors: Sep 13 00:54:17.878844 systemd-resolved[1593]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:54:17.878864 systemd-resolved[1593]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:54:18.461516 systemd-timesyncd[1594]: Contacted time server 64.142.54.12:123 (0.flatcar.pool.ntp.org). 
Sep 13 00:54:18.461546 systemd-timesyncd[1594]: Initial clock synchronization to Sat 2025-09-13 00:54:18.461470 UTC. Sep 13 00:54:18.461809 systemd-resolved[1593]: Using system hostname 'ci-3510.3.8-n-d04f0c45dd'. Sep 13 00:54:18.462498 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.462576 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:18.462639 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:54:18.463155 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:54:18.471980 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:54:18.480647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:18.480725 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:18.488697 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:18.488785 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:18.497700 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:18.497803 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:18.509327 systemd[1]: Reached target time-set.target. Sep 13 00:54:18.517739 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.518948 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:18.526760 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:18.539773 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:54:18.542409 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 00:54:18.546695 systemd-networkd[1410]: enp2s0f1np1: Link UP Sep 13 00:54:18.547171 systemd-networkd[1410]: enp2s0f1np1: Gained carrier Sep 13 00:54:18.548522 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.548694 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:18.548842 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:54:18.549956 systemd[1]: Started systemd-resolved.service. Sep 13 00:54:18.558945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:18.559139 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:18.568081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:18.568300 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:18.577424 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:18.577775 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:18.586450 systemd[1]: Reached target network.target. Sep 13 00:54:18.601440 systemd[1]: Reached target nss-lookup.target. Sep 13 00:54:18.606421 kernel: bond0: (slave enp2s0f1np1): link status up, enabling it in 200 ms Sep 13 00:54:18.606452 kernel: bond0: (slave enp2s0f1np1): invalid new link 3 on slave Sep 13 00:54:18.628438 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:18.628510 systemd[1]: Reached target sysinit.target. Sep 13 00:54:18.636534 systemd[1]: Started motdgen.path. Sep 13 00:54:18.643541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Sep 13 00:54:18.653611 systemd[1]: Started logrotate.timer. Sep 13 00:54:18.660557 systemd[1]: Started mdadm.timer. Sep 13 00:54:18.667516 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:54:18.675473 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:54:18.675590 systemd[1]: Reached target paths.target. Sep 13 00:54:18.682524 systemd[1]: Reached target timers.target. Sep 13 00:54:18.690020 systemd[1]: Listening on dbus.socket. Sep 13 00:54:18.699192 systemd[1]: Starting docker.socket... Sep 13 00:54:18.709302 systemd[1]: Listening on sshd.socket. Sep 13 00:54:18.716940 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:18.717273 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.726630 systemd[1]: Listening on docker.socket. Sep 13 00:54:18.738069 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:54:18.738409 systemd[1]: Reached target sockets.target. Sep 13 00:54:18.746728 systemd[1]: Reached target basic.target. Sep 13 00:54:18.753919 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:54:18.754005 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:18.754257 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.754554 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:54:18.757736 systemd[1]: Starting containerd.service... Sep 13 00:54:18.767175 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:54:18.778600 systemd[1]: Starting coreos-metadata.service... 
Sep 13 00:54:18.788624 systemd[1]: Starting dbus.service... Sep 13 00:54:18.798088 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:54:18.807775 systemd[1]: Starting extend-filesystems.service... Sep 13 00:54:18.809022 jq[1626]: false Sep 13 00:54:18.814481 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:54:18.816093 systemd[1]: Starting modprobe@drm.service... Sep 13 00:54:18.822537 extend-filesystems[1627]: Found loop1 Sep 13 00:54:18.851479 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 13 00:54:18.851503 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 13 00:54:18.851530 coreos-metadata[1622]: Sep 13 00:54:18.844 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 00:54:18.851661 coreos-metadata[1619]: Sep 13 00:54:18.842 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 00:54:18.832706 systemd[1]: Starting motdgen.service... 
Sep 13 00:54:18.823537 dbus-daemon[1625]: [system] SELinux support is enabled Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sda Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb1 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb2 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb3 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found usr Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb4 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb6 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb7 Sep 13 00:54:18.851934 extend-filesystems[1627]: Found sdb9 Sep 13 00:54:18.851934 extend-filesystems[1627]: Checking size of /dev/sdb9 Sep 13 00:54:18.851934 extend-filesystems[1627]: Resized partition /dev/sdb9 Sep 13 00:54:18.992535 extend-filesystems[1637]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:54:18.859334 systemd[1]: Starting prepare-helm.service... Sep 13 00:54:18.873053 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:54:18.892092 systemd[1]: Starting sshd-keygen.service... Sep 13 00:54:18.911371 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:54:18.917508 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:19.008848 update_engine[1662]: I0913 00:54:18.985040 1662 main.cc:92] Flatcar Update Engine starting Sep 13 00:54:19.008848 update_engine[1662]: I0913 00:54:18.988685 1662 update_check_scheduler.cc:74] Next update check in 7m36s Sep 13 00:54:18.918503 systemd[1]: Starting tcsd.service... Sep 13 00:54:19.009038 jq[1663]: true Sep 13 00:54:18.935135 systemd[1]: Starting update-engine.service... Sep 13 00:54:18.954067 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Sep 13 00:54:18.968390 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:18.969620 systemd[1]: Started dbus.service. Sep 13 00:54:18.986175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:54:18.986295 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:54:18.986547 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:54:18.986627 systemd[1]: Finished modprobe@drm.service. Sep 13 00:54:19.000702 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:54:19.000828 systemd[1]: Finished motdgen.service. Sep 13 00:54:19.016216 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:54:19.016339 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:54:19.027338 jq[1671]: true Sep 13 00:54:19.027921 systemd[1]: Finished ensure-sysext.service. Sep 13 00:54:19.036319 env[1672]: time="2025-09-13T00:54:19.036263317Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:54:19.040574 tar[1668]: linux-amd64/helm Sep 13 00:54:19.042897 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 13 00:54:19.043025 systemd[1]: Condition check resulted in tcsd.service being skipped. Sep 13 00:54:19.044746 env[1672]: time="2025-09-13T00:54:19.044725519Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:54:19.044816 env[1672]: time="2025-09-13T00:54:19.044806877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045420 env[1672]: time="2025-09-13T00:54:19.045352974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045420 env[1672]: time="2025-09-13T00:54:19.045377672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045521 env[1672]: time="2025-09-13T00:54:19.045510496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045550 env[1672]: time="2025-09-13T00:54:19.045520625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045550 env[1672]: time="2025-09-13T00:54:19.045527954Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:54:19.045550 env[1672]: time="2025-09-13T00:54:19.045534168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045595 env[1672]: time="2025-09-13T00:54:19.045573178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045710 env[1672]: time="2025-09-13T00:54:19.045701965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045801 env[1672]: time="2025-09-13T00:54:19.045791042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:19.045826 env[1672]: time="2025-09-13T00:54:19.045801180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:54:19.045846 env[1672]: time="2025-09-13T00:54:19.045827610Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:54:19.045846 env[1672]: time="2025-09-13T00:54:19.045835748Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:54:19.046690 systemd[1]: Started update-engine.service. Sep 13 00:54:19.055497 systemd[1]: Started locksmithd.service. Sep 13 00:54:19.062431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:54:19.062445 systemd[1]: Reached target system-config.target. Sep 13 00:54:19.070351 env[1672]: time="2025-09-13T00:54:19.070334708Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070356670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070370734Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070388243Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070402054Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070413407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070420617Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070428296Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070436532Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070444722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070451718Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.070462070Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.075430284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:54:19.075838 env[1672]: time="2025-09-13T00:54:19.075485758Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:54:19.075693 systemd[1]: Starting systemd-logind.service... Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.075994785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076051240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076064080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076172268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076193666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076207559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076231 env[1672]: time="2025-09-13T00:54:19.076222900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076236242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076250273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076262510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076273728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076287819Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076388839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076402911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076415634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076426545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076443270Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076461943Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:54:19.076482 env[1672]: time="2025-09-13T00:54:19.076478411Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:54:19.076783 env[1672]: time="2025-09-13T00:54:19.076506775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:54:19.076816 env[1672]: time="2025-09-13T00:54:19.076685175Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:54:19.076816 env[1672]: time="2025-09-13T00:54:19.076735397Z" level=info msg="Connect containerd service" Sep 13 00:54:19.076816 env[1672]: time="2025-09-13T00:54:19.076763264Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:54:19.082447 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:54:19.082470 systemd[1]: Reached target user-config.target. Sep 13 00:54:19.085407 env[1672]: time="2025-09-13T00:54:19.085393519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:54:19.085509 env[1672]: time="2025-09-13T00:54:19.085485320Z" level=info msg="Start subscribing containerd event" Sep 13 00:54:19.085543 env[1672]: time="2025-09-13T00:54:19.085522207Z" level=info msg="Start recovering state" Sep 13 00:54:19.085574 env[1672]: time="2025-09-13T00:54:19.085556687Z" level=info msg="Start event monitor" Sep 13 00:54:19.085574 env[1672]: time="2025-09-13T00:54:19.085559250Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:54:19.085574 env[1672]: time="2025-09-13T00:54:19.085565853Z" level=info msg="Start snapshots syncer" Sep 13 00:54:19.085653 env[1672]: time="2025-09-13T00:54:19.085575390Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:54:19.085653 env[1672]: time="2025-09-13T00:54:19.085579432Z" level=info msg="Start streaming server" Sep 13 00:54:19.085653 env[1672]: time="2025-09-13T00:54:19.085600026Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 00:54:19.085653 env[1672]: time="2025-09-13T00:54:19.085644427Z" level=info msg="containerd successfully booted in 0.049780s" Sep 13 00:54:19.090546 systemd[1]: Started containerd.service. Sep 13 00:54:19.094514 bash[1705]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:54:19.097631 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:54:19.102915 systemd-logind[1709]: Watching system buttons on /dev/input/event3 (Power Button) Sep 13 00:54:19.102926 systemd-logind[1709]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:54:19.102936 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 13 00:54:19.103036 systemd-logind[1709]: New seat seat0. Sep 13 00:54:19.107668 systemd[1]: Started systemd-logind.service. Sep 13 00:54:19.123602 locksmithd[1708]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:54:19.313994 tar[1668]: linux-amd64/LICENSE Sep 13 00:54:19.313994 tar[1668]: linux-amd64/README.md Sep 13 00:54:19.316967 systemd[1]: Finished prepare-helm.service. Sep 13 00:54:19.356386 sshd_keygen[1659]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:54:19.368011 systemd[1]: Finished sshd-keygen.service. Sep 13 00:54:19.375483 systemd[1]: Starting issuegen.service... Sep 13 00:54:19.382630 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:54:19.382741 systemd[1]: Finished issuegen.service. Sep 13 00:54:19.391330 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:54:19.399748 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:54:19.408225 systemd[1]: Started getty@tty1.service. Sep 13 00:54:19.416149 systemd[1]: Started serial-getty@ttyS1.service. Sep 13 00:54:19.424600 systemd[1]: Reached target getty.target. 
Sep 13 00:54:19.647217 systemd-networkd[1410]: bond0: Gained IPv6LL Sep 13 00:54:19.651010 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:54:19.661814 systemd[1]: Reached target network-online.target. Sep 13 00:54:19.674253 systemd[1]: Starting kubelet.service... Sep 13 00:54:20.193456 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 13 00:54:20.226004 extend-filesystems[1637]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 13 00:54:20.226004 extend-filesystems[1637]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 13 00:54:20.226004 extend-filesystems[1637]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 13 00:54:20.263461 extend-filesystems[1627]: Resized filesystem in /dev/sdb9 Sep 13 00:54:20.226565 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:54:20.226723 systemd[1]: Finished extend-filesystems.service. Sep 13 00:54:20.558857 systemd[1]: Started kubelet.service. Sep 13 00:54:20.951468 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Sep 13 00:54:21.079108 kubelet[1757]: E0913 00:54:21.079084 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:21.080146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:21.080234 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:54:24.002538 coreos-metadata[1619]: Sep 13 00:54:24.002 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Sep 13 00:54:24.003293 coreos-metadata[1622]: Sep 13 00:54:24.002 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Sep 13 00:54:24.439681 login[1745]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:54:24.445086 login[1744]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:54:24.447865 systemd-logind[1709]: New session 1 of user core. Sep 13 00:54:24.448255 systemd[1]: Created slice user-500.slice. Sep 13 00:54:24.448794 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:54:24.449942 systemd-logind[1709]: New session 2 of user core. Sep 13 00:54:24.454160 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:54:24.454821 systemd[1]: Starting user@500.service... Sep 13 00:54:24.470937 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:24.555646 systemd[1781]: Queued start job for default target default.target. Sep 13 00:54:24.555750 systemd[1781]: Reached target paths.target. Sep 13 00:54:24.555761 systemd[1781]: Reached target sockets.target. Sep 13 00:54:24.555768 systemd[1781]: Reached target timers.target. Sep 13 00:54:24.555775 systemd[1781]: Reached target basic.target. Sep 13 00:54:24.555795 systemd[1781]: Reached target default.target. Sep 13 00:54:24.555809 systemd[1781]: Startup finished in 75ms. Sep 13 00:54:24.555866 systemd[1]: Started user@500.service. Sep 13 00:54:24.556394 systemd[1]: Started session-1.scope. Sep 13 00:54:24.556690 systemd[1]: Started session-2.scope. 
Sep 13 00:54:25.002750 coreos-metadata[1619]: Sep 13 00:54:25.002 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 00:54:25.003542 coreos-metadata[1622]: Sep 13 00:54:25.002 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 00:54:25.006767 coreos-metadata[1619]: Sep 13 00:54:25.006 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Sep 13 00:54:25.008985 coreos-metadata[1622]: Sep 13 00:54:25.008 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Sep 13 00:54:26.343173 systemd[1]: Created slice system-sshd.slice. Sep 13 00:54:26.343980 systemd[1]: Started sshd@0-147.75.203.133:22-139.178.89.65:42180.service. Sep 13 00:54:26.385196 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 42180 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:26.386329 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:26.390419 systemd-logind[1709]: New session 3 of user core. Sep 13 00:54:26.391302 systemd[1]: Started session-3.scope. Sep 13 00:54:26.445610 systemd[1]: Started sshd@1-147.75.203.133:22-139.178.89.65:42182.service. Sep 13 00:54:26.475173 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 42182 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:26.475855 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:26.478214 systemd-logind[1709]: New session 4 of user core. Sep 13 00:54:26.478617 systemd[1]: Started session-4.scope. 
Sep 13 00:54:26.530941 sshd[1809]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:26.532511 systemd[1]: Started sshd@2-147.75.203.133:22-139.178.89.65:42190.service. Sep 13 00:54:26.532821 systemd[1]: sshd@1-147.75.203.133:22-139.178.89.65:42182.service: Deactivated successfully. Sep 13 00:54:26.533296 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:54:26.533305 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:54:26.533774 systemd-logind[1709]: Removed session 4. Sep 13 00:54:26.562349 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 42190 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:26.563269 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:26.566242 systemd-logind[1709]: New session 5 of user core. Sep 13 00:54:26.566889 systemd[1]: Started session-5.scope. Sep 13 00:54:26.620956 sshd[1815]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:26.622184 systemd[1]: sshd@2-147.75.203.133:22-139.178.89.65:42190.service: Deactivated successfully. Sep 13 00:54:26.622709 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:54:26.622714 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:54:26.623116 systemd-logind[1709]: Removed session 5. 
Sep 13 00:54:27.006976 coreos-metadata[1619]: Sep 13 00:54:27.006 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Sep 13 00:54:27.009168 coreos-metadata[1622]: Sep 13 00:54:27.009 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Sep 13 00:54:28.198310 coreos-metadata[1619]: Sep 13 00:54:28.198 INFO Fetch successful Sep 13 00:54:28.267539 coreos-metadata[1622]: Sep 13 00:54:28.267 INFO Fetch successful Sep 13 00:54:28.281248 unknown[1619]: wrote ssh authorized keys file for user: core Sep 13 00:54:28.293924 update-ssh-keys[1824]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:54:28.294233 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 13 00:54:28.303040 systemd[1]: Finished coreos-metadata.service. Sep 13 00:54:28.303909 systemd[1]: Started packet-phone-home.service. Sep 13 00:54:28.304026 systemd[1]: Reached target multi-user.target. Sep 13 00:54:28.304812 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:54:28.308721 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:54:28.308831 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:54:28.308915 systemd[1]: Startup finished in 26.447s (kernel) + 16.484s (userspace) = 42.931s. Sep 13 00:54:28.309224 curl[1832]: % Total % Received % Xferd Average Speed Time Time Time Current Sep 13 00:54:28.309402 curl[1832]: Dload Upload Total Spent Left Speed Sep 13 00:54:28.712030 curl[1832]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Sep 13 00:54:28.713938 systemd[1]: packet-phone-home.service: Deactivated successfully. Sep 13 00:54:31.250259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:54:31.250738 systemd[1]: Stopped kubelet.service. Sep 13 00:54:31.253922 systemd[1]: Starting kubelet.service... Sep 13 00:54:31.472005 systemd[1]: Started kubelet.service. 
Sep 13 00:54:31.546712 kubelet[1844]: E0913 00:54:31.546599 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:31.550234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:31.550391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:54:36.628196 systemd[1]: Started sshd@3-147.75.203.133:22-139.178.89.65:50160.service. Sep 13 00:54:36.658110 sshd[1864]: Accepted publickey for core from 139.178.89.65 port 50160 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:36.661044 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:36.670984 systemd-logind[1709]: New session 6 of user core. Sep 13 00:54:36.673044 systemd[1]: Started session-6.scope. Sep 13 00:54:36.729072 sshd[1864]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:36.730659 systemd[1]: Started sshd@4-147.75.203.133:22-139.178.89.65:50170.service. Sep 13 00:54:36.730962 systemd[1]: sshd@3-147.75.203.133:22-139.178.89.65:50160.service: Deactivated successfully. Sep 13 00:54:36.731425 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:54:36.731480 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:54:36.731997 systemd-logind[1709]: Removed session 6. Sep 13 00:54:36.760421 sshd[1870]: Accepted publickey for core from 139.178.89.65 port 50170 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:36.761392 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:36.764574 systemd-logind[1709]: New session 7 of user core. Sep 13 00:54:36.765261 systemd[1]: Started session-7.scope. 
Sep 13 00:54:36.817677 sshd[1870]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:36.819126 systemd[1]: Started sshd@5-147.75.203.133:22-139.178.89.65:50184.service. Sep 13 00:54:36.819448 systemd[1]: sshd@4-147.75.203.133:22-139.178.89.65:50170.service: Deactivated successfully. Sep 13 00:54:36.819969 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:54:36.819975 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:54:36.820384 systemd-logind[1709]: Removed session 7. Sep 13 00:54:36.849369 sshd[1877]: Accepted publickey for core from 139.178.89.65 port 50184 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:36.850233 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:36.853030 systemd-logind[1709]: New session 8 of user core. Sep 13 00:54:36.853671 systemd[1]: Started session-8.scope. Sep 13 00:54:36.917107 sshd[1877]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:36.922916 systemd[1]: Started sshd@6-147.75.203.133:22-139.178.89.65:50200.service. Sep 13 00:54:36.924476 systemd[1]: sshd@5-147.75.203.133:22-139.178.89.65:50184.service: Deactivated successfully. Sep 13 00:54:36.926762 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:54:36.926824 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:54:36.927543 systemd-logind[1709]: Removed session 8. Sep 13 00:54:36.955785 sshd[1884]: Accepted publickey for core from 139.178.89.65 port 50200 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:36.956576 sshd[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:36.959260 systemd-logind[1709]: New session 9 of user core. Sep 13 00:54:36.959838 systemd[1]: Started session-9.scope. 
Sep 13 00:54:37.045008 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:54:37.045632 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:54:37.065768 dbus-daemon[1625]: avc: received setenforce notice (enforcing=1) Sep 13 00:54:37.070688 sudo[1889]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:37.075518 sshd[1884]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:37.081574 systemd[1]: Started sshd@7-147.75.203.133:22-139.178.89.65:50206.service. Sep 13 00:54:37.083336 systemd[1]: sshd@6-147.75.203.133:22-139.178.89.65:50200.service: Deactivated successfully. Sep 13 00:54:37.085827 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:54:37.085872 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:54:37.088466 systemd-logind[1709]: Removed session 9. Sep 13 00:54:37.143794 sshd[1891]: Accepted publickey for core from 139.178.89.65 port 50206 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:37.146216 sshd[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:37.153922 systemd-logind[1709]: New session 10 of user core. Sep 13 00:54:37.155510 systemd[1]: Started session-10.scope. Sep 13 00:54:37.227899 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:54:37.228536 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:54:37.235354 sudo[1898]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:37.247184 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:54:37.247811 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:54:37.270771 systemd[1]: Stopping audit-rules.service... 
Sep 13 00:54:37.272000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:54:37.274224 auditctl[1901]: No rules Sep 13 00:54:37.274988 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:54:37.275577 systemd[1]: Stopped audit-rules.service. Sep 13 00:54:37.279135 systemd[1]: Starting audit-rules.service... Sep 13 00:54:37.279338 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 13 00:54:37.279369 kernel: audit: type=1305 audit(1757724877.272:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Sep 13 00:54:37.288439 augenrules[1919]: No rules Sep 13 00:54:37.288771 systemd[1]: Finished audit-rules.service. Sep 13 00:54:37.289141 sudo[1897]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:37.289950 sshd[1891]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:37.291396 systemd[1]: Started sshd@8-147.75.203.133:22-139.178.89.65:50220.service. Sep 13 00:54:37.291849 systemd[1]: sshd@7-147.75.203.133:22-139.178.89.65:50206.service: Deactivated successfully. Sep 13 00:54:37.292287 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:54:37.292324 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:54:37.292700 systemd-logind[1709]: Removed session 10. 
Sep 13 00:54:37.272000 audit[1901]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5445e210 a2=420 a3=0 items=0 ppid=1 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.325910 kernel: audit: type=1300 audit(1757724877.272:140): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5445e210 a2=420 a3=0 items=0 ppid=1 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.325977 kernel: audit: type=1327 audit(1757724877.272:140): proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:54:37.272000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Sep 13 00:54:37.335438 kernel: audit: type=1131 audit(1757724877.274:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.357902 kernel: audit: type=1130 audit(1757724877.287:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:37.359488 sshd[1925]: Accepted publickey for core from 139.178.89.65 port 50220 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 00:54:37.361672 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:37.363824 systemd-logind[1709]: New session 11 of user core. Sep 13 00:54:37.364204 systemd[1]: Started session-11.scope. Sep 13 00:54:37.380345 kernel: audit: type=1106 audit(1757724877.287:143): pid=1897 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.287000 audit[1897]: USER_END pid=1897 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.406445 kernel: audit: type=1104 audit(1757724877.287:144): pid=1897 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.287000 audit[1897]: CRED_DISP pid=1897 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:37.412150 sudo[1930]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:54:37.412276 sudo[1930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:54:37.430021 kernel: audit: type=1106 audit(1757724877.289:145): pid=1891 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.289000 audit[1891]: USER_END pid=1891 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.430584 systemd[1]: Starting docker.service... Sep 13 00:54:37.447577 env[1946]: time="2025-09-13T00:54:37.447536468Z" level=info msg="Starting up" Sep 13 00:54:37.448441 env[1946]: time="2025-09-13T00:54:37.448356746Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:54:37.448441 env[1946]: time="2025-09-13T00:54:37.448369917Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:54:37.448441 env[1946]: time="2025-09-13T00:54:37.448384098Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:54:37.448441 env[1946]: time="2025-09-13T00:54:37.448390217Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:54:37.449240 env[1946]: time="2025-09-13T00:54:37.449230275Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:54:37.449240 env[1946]: time="2025-09-13T00:54:37.449238538Z" level=info msg="scheme \"unix\" not registered, fallback to 
default scheme" module=grpc Sep 13 00:54:37.449294 env[1946]: time="2025-09-13T00:54:37.449246853Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:54:37.449294 env[1946]: time="2025-09-13T00:54:37.449253079Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:54:37.289000 audit[1891]: CRED_DISP pid=1891 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.488073 kernel: audit: type=1104 audit(1757724877.289:146): pid=1891 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.488120 kernel: audit: type=1130 audit(1757724877.290:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.203.133:22-139.178.89.65:50220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.203.133:22-139.178.89.65:50220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-147.75.203.133:22-139.178.89.65:50206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:37.358000 audit[1925]: USER_ACCT pid=1925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.360000 audit[1925]: CRED_ACQ pid=1925 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.360000 audit[1925]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffae08a1a0 a2=3 a3=0 items=0 ppid=1 pid=1925 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.360000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 00:54:37.364000 audit[1925]: USER_START pid=1925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.365000 audit[1929]: CRED_ACQ pid=1929 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:54:37.410000 audit[1930]: USER_ACCT pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:37.410000 audit[1930]: CRED_REFR pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.412000 audit[1930]: USER_START pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:54:37.609441 env[1946]: time="2025-09-13T00:54:37.609331420Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 13 00:54:37.609441 env[1946]: time="2025-09-13T00:54:37.609420054Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 13 00:54:37.609844 env[1946]: time="2025-09-13T00:54:37.609783122Z" level=info msg="Loading containers: start." Sep 13 00:54:37.695000 audit[1988]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.695000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff1aeaef80 a2=0 a3=7fff1aeaef6c items=0 ppid=1946 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.695000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Sep 13 00:54:37.696000 audit[1990]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.696000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc06445000 a2=0 a3=7ffc06444fec items=0 ppid=1946 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Sep 13 00:54:37.697000 audit[1992]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.697000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5e6e5d30 a2=0 a3=7ffd5e6e5d1c items=0 ppid=1946 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.697000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:54:37.698000 audit[1994]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.698000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff2f178790 a2=0 a3=7fff2f17877c items=0 ppid=1946 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.698000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:54:37.700000 audit[1996]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.700000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6d370b20 a2=0 a3=7ffc6d370b0c items=0 ppid=1946 pid=1996 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.700000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Sep 13 00:54:37.757000 audit[2001]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.757000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf5bca320 a2=0 a3=7ffcf5bca30c items=0 ppid=1946 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.757000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Sep 13 00:54:37.767000 audit[2003]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.767000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc90441c40 a2=0 a3=7ffc90441c2c items=0 ppid=1946 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.767000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Sep 13 00:54:37.772000 audit[2005]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.772000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff4f456260 a2=0 
a3=7fff4f45624c items=0 ppid=1946 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.772000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Sep 13 00:54:37.776000 audit[2007]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.776000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcdbc87870 a2=0 a3=7ffcdbc8785c items=0 ppid=1946 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.776000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:54:37.794000 audit[2011]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.794000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff9bbc12c0 a2=0 a3=7fff9bbc12ac items=0 ppid=1946 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.794000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:54:37.808000 audit[2012]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.808000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 
a1=7fff0743aed0 a2=0 a3=7fff0743aebc items=0 ppid=1946 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.808000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:54:37.839417 kernel: Initializing XFRM netlink socket Sep 13 00:54:37.879234 env[1946]: time="2025-09-13T00:54:37.879216571Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 00:54:37.890000 audit[2020]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.890000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffeda572740 a2=0 a3=7ffeda57272c items=0 ppid=1946 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.890000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Sep 13 00:54:37.928000 audit[2023]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.928000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc417f3320 a2=0 a3=7ffc417f330c items=0 ppid=1946 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.928000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Sep 13 00:54:37.936000 audit[2026]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.936000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd447b3d10 a2=0 a3=7ffd447b3cfc items=0 ppid=1946 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.936000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Sep 13 00:54:37.941000 audit[2028]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.941000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd694402f0 a2=0 a3=7ffd694402dc items=0 ppid=1946 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.941000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Sep 13 00:54:37.946000 audit[2030]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.946000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdab5eef20 a2=0 a3=7ffdab5eef0c items=0 ppid=1946 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.946000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Sep 13 00:54:37.952000 audit[2032]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.952000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdc053efc0 a2=0 a3=7ffdc053efac items=0 ppid=1946 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.952000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Sep 13 00:54:37.957000 audit[2034]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.957000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff5dadeb50 a2=0 a3=7fff5dadeb3c items=0 ppid=1946 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.957000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Sep 13 00:54:37.986000 audit[2037]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.986000 audit[2037]: SYSCALL arch=c000003e syscall=46 
success=yes exit=508 a0=3 a1=7ffd1c607490 a2=0 a3=7ffd1c60747c items=0 ppid=1946 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.986000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Sep 13 00:54:37.992000 audit[2039]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.992000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffde9125c40 a2=0 a3=7ffde9125c2c items=0 ppid=1946 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.992000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Sep 13 00:54:37.997000 audit[2041]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:37.997000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffff093460 a2=0 a3=7fffff09344c items=0 ppid=1946 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:37.997000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Sep 13 00:54:38.002000 audit[2043]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:38.002000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdf3fa9560 a2=0 a3=7ffdf3fa954c items=0 ppid=1946 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.002000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Sep 13 00:54:38.004818 systemd-networkd[1410]: docker0: Link UP Sep 13 00:54:38.018000 audit[2047]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:38.018000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc0f001180 a2=0 a3=7ffc0f00116c items=0 ppid=1946 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.018000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:54:38.040000 audit[2048]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:38.040000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc41d55790 a2=0 
a3=7ffc41d5577c items=0 ppid=1946 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:38.040000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Sep 13 00:54:38.042641 env[1946]: time="2025-09-13T00:54:38.042536827Z" level=info msg="Loading containers: done." Sep 13 00:54:38.064249 env[1946]: time="2025-09-13T00:54:38.064143671Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:54:38.064579 env[1946]: time="2025-09-13T00:54:38.064546305Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:54:38.064839 env[1946]: time="2025-09-13T00:54:38.064758784Z" level=info msg="Daemon has completed initialization" Sep 13 00:54:38.089891 systemd[1]: Started docker.service. Sep 13 00:54:38.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:38.105464 env[1946]: time="2025-09-13T00:54:38.105329124Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:54:39.132537 env[1672]: time="2025-09-13T00:54:39.132431148Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:54:39.971677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774778091.mount: Deactivated successfully. 
Sep 13 00:54:40.998584 env[1672]: time="2025-09-13T00:54:40.998557312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:40.999184 env[1672]: time="2025-09-13T00:54:40.999172809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:41.000756 env[1672]: time="2025-09-13T00:54:41.000699838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:41.001674 env[1672]: time="2025-09-13T00:54:41.001630033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:41.002139 env[1672]: time="2025-09-13T00:54:41.002124726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:54:41.002620 env[1672]: time="2025-09-13T00:54:41.002576733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:54:41.749493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:54:41.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:41.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:41.749661 systemd[1]: Stopped kubelet.service. Sep 13 00:54:41.750852 systemd[1]: Starting kubelet.service... Sep 13 00:54:42.080878 systemd[1]: Started kubelet.service. Sep 13 00:54:42.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:42.105713 kubelet[2106]: E0913 00:54:42.105652 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:42.107217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:42.107300 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:54:42.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 13 00:54:42.401755 env[1672]: time="2025-09-13T00:54:42.401673218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.402549 env[1672]: time="2025-09-13T00:54:42.402506593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.403553 env[1672]: time="2025-09-13T00:54:42.403513489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.404496 env[1672]: time="2025-09-13T00:54:42.404443179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:42.404967 env[1672]: time="2025-09-13T00:54:42.404922557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:54:42.405340 env[1672]: time="2025-09-13T00:54:42.405304681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:54:43.483421 env[1672]: time="2025-09-13T00:54:43.483340834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.484017 env[1672]: time="2025-09-13T00:54:43.483971415Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.485235 env[1672]: time="2025-09-13T00:54:43.485194505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.486196 env[1672]: time="2025-09-13T00:54:43.486155533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:43.486711 env[1672]: time="2025-09-13T00:54:43.486670072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:54:43.487139 env[1672]: time="2025-09-13T00:54:43.487084468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:54:44.464328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175308990.mount: Deactivated successfully. 
Sep 13 00:54:44.885322 env[1672]: time="2025-09-13T00:54:44.885268292Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.885864 env[1672]: time="2025-09-13T00:54:44.885819191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.886526 env[1672]: time="2025-09-13T00:54:44.886484460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.887170 env[1672]: time="2025-09-13T00:54:44.887131246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.887703 env[1672]: time="2025-09-13T00:54:44.887670394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:54:44.888295 env[1672]: time="2025-09-13T00:54:44.888282509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:54:45.496576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265788071.mount: Deactivated successfully. 
Sep 13 00:54:46.221952 env[1672]: time="2025-09-13T00:54:46.221887982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.222654 env[1672]: time="2025-09-13T00:54:46.222610301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.223971 env[1672]: time="2025-09-13T00:54:46.223929892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.224931 env[1672]: time="2025-09-13T00:54:46.224875259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.225387 env[1672]: time="2025-09-13T00:54:46.225328231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:54:46.225736 env[1672]: time="2025-09-13T00:54:46.225723032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:54:46.846567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681744476.mount: Deactivated successfully. 
Sep 13 00:54:46.865107 env[1672]: time="2025-09-13T00:54:46.865063011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.865741 env[1672]: time="2025-09-13T00:54:46.865703288Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.866395 env[1672]: time="2025-09-13T00:54:46.866377038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.867393 env[1672]: time="2025-09-13T00:54:46.867363517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:46.867592 env[1672]: time="2025-09-13T00:54:46.867553179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:54:46.867823 env[1672]: time="2025-09-13T00:54:46.867809685Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:54:47.373666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664920810.mount: Deactivated successfully. 
Sep 13 00:54:49.034254 env[1672]: time="2025-09-13T00:54:49.034226988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.034896 env[1672]: time="2025-09-13T00:54:49.034883896Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.036111 env[1672]: time="2025-09-13T00:54:49.036066499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.037476 env[1672]: time="2025-09-13T00:54:49.037376261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:49.037923 env[1672]: time="2025-09-13T00:54:49.037882764Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:54:51.004862 systemd[1]: Stopped kubelet.service. Sep 13 00:54:51.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.006282 systemd[1]: Starting kubelet.service... Sep 13 00:54:51.010558 kernel: kauditd_printk_skb: 88 callbacks suppressed Sep 13 00:54:51.010601 kernel: audit: type=1130 audit(1757724891.003:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:51.023001 systemd[1]: Reloading. Sep 13 00:54:51.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.058160 /usr/lib/systemd/system-generators/torcx-generator[2198]: time="2025-09-13T00:54:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:51.058183 /usr/lib/systemd/system-generators/torcx-generator[2198]: time="2025-09-13T00:54:51Z" level=info msg="torcx already run" Sep 13 00:54:51.088355 kernel: audit: type=1131 audit(1757724891.003:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.114558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:51.114566 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:51.125967 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:51.205686 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:54:51.205728 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:54:51.205877 systemd[1]: Stopped kubelet.service. 
Sep 13 00:54:51.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:54:51.206782 systemd[1]: Starting kubelet.service... Sep 13 00:54:51.262437 kernel: audit: type=1130 audit(1757724891.204:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Sep 13 00:54:51.424795 systemd[1]: Started kubelet.service. Sep 13 00:54:51.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.444326 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:51.444326 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:54:51.444326 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:54:51.444596 kubelet[2272]: I0913 00:54:51.444330 2272 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:51.484435 kernel: audit: type=1130 audit(1757724891.423:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:51.731130 kubelet[2272]: I0913 00:54:51.731075 2272 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:54:51.731130 kubelet[2272]: I0913 00:54:51.731090 2272 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:51.731305 kubelet[2272]: I0913 00:54:51.731296 2272 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:54:51.771320 kubelet[2272]: E0913 00:54:51.771275 2272 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.203.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:51.771896 kubelet[2272]: I0913 00:54:51.771852 2272 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:51.775625 kubelet[2272]: E0913 00:54:51.775582 2272 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:51.775625 kubelet[2272]: I0913 00:54:51.775595 2272 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:54:51.796614 kubelet[2272]: I0913 00:54:51.796576 2272 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:54:51.797347 kubelet[2272]: I0913 00:54:51.797307 2272 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:54:51.797402 kubelet[2272]: I0913 00:54:51.797380 2272 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:51.797531 kubelet[2272]: I0913 00:54:51.797399 2272 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-d04f0c45dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimenta
lMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:54:51.797531 kubelet[2272]: I0913 00:54:51.797509 2272 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:54:51.797531 kubelet[2272]: I0913 00:54:51.797516 2272 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:54:51.797678 kubelet[2272]: I0913 00:54:51.797588 2272 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:51.799993 kubelet[2272]: I0913 00:54:51.799955 2272 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:54:51.799993 kubelet[2272]: I0913 00:54:51.799967 2272 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:51.799993 kubelet[2272]: I0913 00:54:51.799986 2272 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:54:51.799993 kubelet[2272]: I0913 00:54:51.799997 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:51.812465 kubelet[2272]: W0913 00:54:51.812368 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-d04f0c45dd&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:51.812465 kubelet[2272]: E0913 00:54:51.812455 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-d04f0c45dd&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:51.813184 kubelet[2272]: W0913 00:54:51.813123 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:51.813184 kubelet[2272]: E0913 00:54:51.813174 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:51.817105 kubelet[2272]: I0913 00:54:51.817062 2272 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:51.817536 kubelet[2272]: I0913 00:54:51.817497 2272 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:51.817611 kubelet[2272]: W0913 00:54:51.817543 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:54:51.822519 kubelet[2272]: I0913 00:54:51.822499 2272 server.go:1274] "Started kubelet" Sep 13 00:54:51.822666 kubelet[2272]: I0913 00:54:51.822609 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:51.822666 kubelet[2272]: I0913 00:54:51.822638 2272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:51.822902 kubelet[2272]: I0913 00:54:51.822886 2272 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:51.822000 audit[2272]: AVC avc: denied { mac_admin } for pid=2272 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:51.823722 kubelet[2272]: I0913 00:54:51.823631 2272 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:54:51.823722 kubelet[2272]: I0913 00:54:51.823696 2272 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:54:51.823824 kubelet[2272]: I0913 00:54:51.823776 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:51.823909 kubelet[2272]: I0913 00:54:51.823888 2272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:51.823963 kubelet[2272]: I0913 00:54:51.823912 2272 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:54:51.823963 kubelet[2272]: I0913 00:54:51.823930 2272 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:54:51.824080 
kubelet[2272]: E0913 00:54:51.823976 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:51.824080 kubelet[2272]: I0913 00:54:51.823895 2272 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:54:51.824191 kubelet[2272]: I0913 00:54:51.824130 2272 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:51.824253 kubelet[2272]: E0913 00:54:51.824178 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-d04f0c45dd?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="200ms" Sep 13 00:54:51.824313 kubelet[2272]: W0913 00:54:51.824252 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:51.824384 kubelet[2272]: E0913 00:54:51.824329 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:51.836685 kubelet[2272]: I0913 00:54:51.836666 2272 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:51.836791 kubelet[2272]: I0913 00:54:51.836772 2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:51.837708 kubelet[2272]: I0913 00:54:51.837695 2272 factory.go:221] Registration of the containerd container factory successfully Sep 13 
00:54:51.853787 kubelet[2272]: E0913 00:54:51.835578 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.133:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.133:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-d04f0c45dd.1864b179edf0b390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-d04f0c45dd,UID:ci-3510.3.8-n-d04f0c45dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-d04f0c45dd,},FirstTimestamp:2025-09-13 00:54:51.822470032 +0000 UTC m=+0.394910628,LastTimestamp:2025-09-13 00:54:51.822470032 +0000 UTC m=+0.394910628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-d04f0c45dd,}" Sep 13 00:54:51.854931 kubelet[2272]: E0913 00:54:51.854909 2272 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:51.867013 kubelet[2272]: I0913 00:54:51.867002 2272 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:54:51.867013 kubelet[2272]: I0913 00:54:51.867012 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:51.867097 kubelet[2272]: I0913 00:54:51.867022 2272 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:51.868059 kubelet[2272]: I0913 00:54:51.868051 2272 policy_none.go:49] "None policy: Start" Sep 13 00:54:51.868283 kubelet[2272]: I0913 00:54:51.868276 2272 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:54:51.868314 kubelet[2272]: I0913 00:54:51.868287 2272 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:51.822000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:51.912495 kernel: audit: type=1400 audit(1757724891.822:190): avc: denied { mac_admin } for pid=2272 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:51.912528 kernel: audit: type=1401 audit(1757724891.822:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:51.912550 kernel: audit: type=1300 audit(1757724891.822:190): arch=c000003e syscall=188 success=no exit=-22 a0=c000f80300 a1=c000845710 a2=c000f802d0 a3=25 items=0 ppid=1 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.822000 audit[2272]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f80300 a1=c000845710 a2=c000f802d0 a3=25 items=0 ppid=1 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.924678 kubelet[2272]: E0913 00:54:51.924666 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:52.003555 kernel: audit: type=1327 audit(1757724891.822:190): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:51.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:52.024760 kubelet[2272]: E0913 00:54:52.024727 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:52.024920 kubelet[2272]: E0913 00:54:52.024904 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-d04f0c45dd?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="400ms" Sep 13 00:54:52.094224 kernel: audit: type=1400 audit(1757724891.822:191): avc: denied { mac_admin } for pid=2272 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:51.822000 audit[2272]: AVC avc: denied { mac_admin } for pid=2272 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:52.095535 kubelet[2272]: I0913 00:54:52.095523 2272 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:52.095579 kubelet[2272]: I0913 00:54:52.095561 2272 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:54:52.095635 kubelet[2272]: I0913 00:54:52.095628 2272 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:52.095675 kubelet[2272]: I0913 00:54:52.095636 2272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:52.095764 kubelet[2272]: I0913 00:54:52.095755 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:52.096152 kubelet[2272]: E0913 00:54:52.096144 2272 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:52.157530 kernel: audit: type=1401 audit(1757724891.822:191): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:51.822000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:52.158706 kubelet[2272]: I0913 00:54:52.158689 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:52.159250 kubelet[2272]: I0913 00:54:52.159240 2272 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:54:52.159280 kubelet[2272]: I0913 00:54:52.159252 2272 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:54:52.159280 kubelet[2272]: I0913 00:54:52.159265 2272 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:54:52.159326 kubelet[2272]: E0913 00:54:52.159286 2272 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:54:52.159531 kubelet[2272]: W0913 00:54:52.159519 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:52.159561 kubelet[2272]: E0913 00:54:52.159542 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:51.822000 audit[2272]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006ab800 a1=c000845728 a2=c000f80390 a3=25 items=0 ppid=1 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:51.825000 audit[2297]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:51.825000 
audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc49783aa0 a2=0 a3=7ffc49783a8c items=0 ppid=2272 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:54:51.826000 audit[2298]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:51.826000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3b1afff0 a2=0 a3=7ffd3b1affdc items=0 ppid=2272 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:54:51.827000 audit[2300]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:51.827000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff1e215a70 a2=0 a3=7fff1e215a5c items=0 ppid=2272 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:54:51.829000 audit[2302]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2302 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:51.829000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff6b899060 a2=0 a3=7fff6b89904c items=0 ppid=2272 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:51.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:54:52.094000 audit[2272]: AVC avc: denied { mac_admin } for pid=2272 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:52.094000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:52.094000 audit[2272]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e6d8f0 a1=c000b75848 a2=c000e6d8c0 a3=25 items=0 ppid=1 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:52.157000 audit[2308]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:52.157000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdaf7eab70 a2=0 a3=7ffdaf7eab5c items=0 ppid=2272 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Sep 13 00:54:52.157000 audit[2309]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:52.157000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcdd67c540 a2=0 a3=7ffcdd67c52c items=0 ppid=2272 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Sep 13 00:54:52.157000 audit[2310]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:52.157000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe67397040 a2=0 a3=7ffe6739702c items=0 ppid=2272 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:54:52.158000 audit[2311]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:52.158000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffd6a978dd0 a2=0 a3=7ffd6a978dbc items=0 ppid=2272 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Sep 13 00:54:52.158000 audit[2312]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:52.158000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd90a259d0 a2=0 a3=7ffd90a259bc items=0 ppid=2272 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:54:52.158000 audit[2313]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:52.158000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc1064b970 a2=0 a3=7ffc1064b95c items=0 ppid=2272 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Sep 13 00:54:52.158000 audit[2314]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:54:52.158000 audit[2314]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc28f1d80 a2=0 a3=7fffc28f1d6c items=0 ppid=2272 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:54:52.159000 audit[2315]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:54:52.159000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1afff8e0 a2=0 a3=7ffc1afff8cc items=0 ppid=2272 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:52.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Sep 13 00:54:52.196841 kubelet[2272]: I0913 00:54:52.196801 2272 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.197013 kubelet[2272]: E0913 00:54:52.196970 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326398 kubelet[2272]: I0913 00:54:52.326196 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " 
pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326398 kubelet[2272]: I0913 00:54:52.326280 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326398 kubelet[2272]: I0913 00:54:52.326339 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3eeafda6ff624b366915d6245bd6b3-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-d04f0c45dd\" (UID: \"8c3eeafda6ff624b366915d6245bd6b3\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326398 kubelet[2272]: I0913 00:54:52.326400 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326915 kubelet[2272]: I0913 00:54:52.326446 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326915 kubelet[2272]: I0913 00:54:52.326492 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326915 kubelet[2272]: I0913 00:54:52.326536 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326915 kubelet[2272]: I0913 00:54:52.326577 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.326915 kubelet[2272]: I0913 00:54:52.326621 2272 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.400483 kubelet[2272]: I0913 00:54:52.400431 2272 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.401117 kubelet[2272]: E0913 00:54:52.401022 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" 
node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.426138 kubelet[2272]: E0913 00:54:52.426051 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-d04f0c45dd?timeout=10s\": dial tcp 147.75.203.133:6443: connect: connection refused" interval="800ms" Sep 13 00:54:52.567148 env[1672]: time="2025-09-13T00:54:52.567023109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-d04f0c45dd,Uid:fc9d0822e7f30297f426b31a4471ac84,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:52.569015 env[1672]: time="2025-09-13T00:54:52.568886295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-d04f0c45dd,Uid:117c1493503282af7efed6bf6aeb2ab3,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:52.569794 env[1672]: time="2025-09-13T00:54:52.569687667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-d04f0c45dd,Uid:8c3eeafda6ff624b366915d6245bd6b3,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:52.733293 kubelet[2272]: W0913 00:54:52.733053 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:52.733293 kubelet[2272]: E0913 00:54:52.733190 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:52.804409 kubelet[2272]: I0913 00:54:52.804313 2272 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.805053 
kubelet[2272]: E0913 00:54:52.804948 2272 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.133:6443/api/v1/nodes\": dial tcp 147.75.203.133:6443: connect: connection refused" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:52.948352 kubelet[2272]: W0913 00:54:52.948216 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:52.948549 kubelet[2272]: E0913 00:54:52.948348 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:52.995816 kubelet[2272]: W0913 00:54:52.995595 2272 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-d04f0c45dd&limit=500&resourceVersion=0": dial tcp 147.75.203.133:6443: connect: connection refused Sep 13 00:54:52.995816 kubelet[2272]: E0913 00:54:52.995715 2272 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-d04f0c45dd&limit=500&resourceVersion=0\": dial tcp 147.75.203.133:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:53.015950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666371340.mount: Deactivated successfully. 
Sep 13 00:54:53.016913 env[1672]: time="2025-09-13T00:54:53.016869300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.018038 env[1672]: time="2025-09-13T00:54:53.018000142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.018569 env[1672]: time="2025-09-13T00:54:53.018521695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.019293 env[1672]: time="2025-09-13T00:54:53.019247321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.019710 env[1672]: time="2025-09-13T00:54:53.019666803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.020131 env[1672]: time="2025-09-13T00:54:53.020076714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.021791 env[1672]: time="2025-09-13T00:54:53.021747418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.022171 env[1672]: time="2025-09-13T00:54:53.022129936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.023859 env[1672]: time="2025-09-13T00:54:53.023819106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.024747 env[1672]: time="2025-09-13T00:54:53.024694577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.025107 env[1672]: time="2025-09-13T00:54:53.025067527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.041415 env[1672]: time="2025-09-13T00:54:53.041377974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:53.049421 env[1672]: time="2025-09-13T00:54:53.049349529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:53.049421 env[1672]: time="2025-09-13T00:54:53.049377565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:53.049421 env[1672]: time="2025-09-13T00:54:53.049384797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:53.049538 env[1672]: time="2025-09-13T00:54:53.049455266Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be41dd5108baeb36968fdbecf82fda75313df26d886e0d6eaa7803351ee96ad8 pid=2327 runtime=io.containerd.runc.v2 Sep 13 00:54:53.049619 env[1672]: time="2025-09-13T00:54:53.049590898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:53.049663 env[1672]: time="2025-09-13T00:54:53.049612486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:53.049663 env[1672]: time="2025-09-13T00:54:53.049625673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:53.049732 env[1672]: time="2025-09-13T00:54:53.049713878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc60f8d390eca3a0012f28f3608d5f1ed1f260a070f0bc55f9201ff8155049ca pid=2330 runtime=io.containerd.runc.v2 Sep 13 00:54:53.051202 env[1672]: time="2025-09-13T00:54:53.051162654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:53.051202 env[1672]: time="2025-09-13T00:54:53.051192482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:53.051307 env[1672]: time="2025-09-13T00:54:53.051207872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:53.051330 env[1672]: time="2025-09-13T00:54:53.051305237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6ddacb579208ab11edb47d7f9694b6fc8226dd1bba8aa3cf5f239020bec5c94 pid=2355 runtime=io.containerd.runc.v2 Sep 13 00:54:53.078179 env[1672]: time="2025-09-13T00:54:53.078140841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-d04f0c45dd,Uid:117c1493503282af7efed6bf6aeb2ab3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc60f8d390eca3a0012f28f3608d5f1ed1f260a070f0bc55f9201ff8155049ca\"" Sep 13 00:54:53.078301 env[1672]: time="2025-09-13T00:54:53.078200380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-d04f0c45dd,Uid:8c3eeafda6ff624b366915d6245bd6b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"be41dd5108baeb36968fdbecf82fda75313df26d886e0d6eaa7803351ee96ad8\"" Sep 13 00:54:53.078897 env[1672]: time="2025-09-13T00:54:53.078855164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-d04f0c45dd,Uid:fc9d0822e7f30297f426b31a4471ac84,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ddacb579208ab11edb47d7f9694b6fc8226dd1bba8aa3cf5f239020bec5c94\"" Sep 13 00:54:53.080275 env[1672]: time="2025-09-13T00:54:53.080255900Z" level=info msg="CreateContainer within sandbox \"be41dd5108baeb36968fdbecf82fda75313df26d886e0d6eaa7803351ee96ad8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:54:53.080328 env[1672]: time="2025-09-13T00:54:53.080313214Z" level=info msg="CreateContainer within sandbox \"bc60f8d390eca3a0012f28f3608d5f1ed1f260a070f0bc55f9201ff8155049ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:54:53.080354 env[1672]: time="2025-09-13T00:54:53.080256262Z" level=info msg="CreateContainer within sandbox 
\"d6ddacb579208ab11edb47d7f9694b6fc8226dd1bba8aa3cf5f239020bec5c94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:54:53.086748 env[1672]: time="2025-09-13T00:54:53.086703510Z" level=info msg="CreateContainer within sandbox \"bc60f8d390eca3a0012f28f3608d5f1ed1f260a070f0bc55f9201ff8155049ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9156a9bd2affaa2ae7e3cafb4fdc9c641d47450ebd018fa4431e527beb1e5e80\"" Sep 13 00:54:53.086966 env[1672]: time="2025-09-13T00:54:53.086938897Z" level=info msg="StartContainer for \"9156a9bd2affaa2ae7e3cafb4fdc9c641d47450ebd018fa4431e527beb1e5e80\"" Sep 13 00:54:53.088379 env[1672]: time="2025-09-13T00:54:53.088320770Z" level=info msg="CreateContainer within sandbox \"d6ddacb579208ab11edb47d7f9694b6fc8226dd1bba8aa3cf5f239020bec5c94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"535b5837c8d32d386b976ac63869d3ef7f432489d2da6f930a0eed4284f86951\"" Sep 13 00:54:53.088503 env[1672]: time="2025-09-13T00:54:53.088477113Z" level=info msg="CreateContainer within sandbox \"be41dd5108baeb36968fdbecf82fda75313df26d886e0d6eaa7803351ee96ad8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"35174e2865a7d9c422537386f826b709a2b57bea02ddd50ae362d82f2c3f8580\"" Sep 13 00:54:53.088550 env[1672]: time="2025-09-13T00:54:53.088499969Z" level=info msg="StartContainer for \"535b5837c8d32d386b976ac63869d3ef7f432489d2da6f930a0eed4284f86951\"" Sep 13 00:54:53.088670 env[1672]: time="2025-09-13T00:54:53.088656152Z" level=info msg="StartContainer for \"35174e2865a7d9c422537386f826b709a2b57bea02ddd50ae362d82f2c3f8580\"" Sep 13 00:54:53.120263 env[1672]: time="2025-09-13T00:54:53.120233557Z" level=info msg="StartContainer for \"9156a9bd2affaa2ae7e3cafb4fdc9c641d47450ebd018fa4431e527beb1e5e80\" returns successfully" Sep 13 00:54:53.120953 env[1672]: time="2025-09-13T00:54:53.120938425Z" level=info msg="StartContainer for 
\"535b5837c8d32d386b976ac63869d3ef7f432489d2da6f930a0eed4284f86951\" returns successfully" Sep 13 00:54:53.121110 env[1672]: time="2025-09-13T00:54:53.121096116Z" level=info msg="StartContainer for \"35174e2865a7d9c422537386f826b709a2b57bea02ddd50ae362d82f2c3f8580\" returns successfully" Sep 13 00:54:53.607088 kubelet[2272]: I0913 00:54:53.607069 2272 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:53.709193 kubelet[2272]: E0913 00:54:53.709147 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-d04f0c45dd\" not found" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:53.759505 kubelet[2272]: E0913 00:54:53.759426 2272 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-d04f0c45dd.1864b179edf0b390 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-d04f0c45dd,UID:ci-3510.3.8-n-d04f0c45dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-d04f0c45dd,},FirstTimestamp:2025-09-13 00:54:51.822470032 +0000 UTC m=+0.394910628,LastTimestamp:2025-09-13 00:54:51.822470032 +0000 UTC m=+0.394910628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-d04f0c45dd,}" Sep 13 00:54:53.811355 kubelet[2272]: E0913 00:54:53.811299 2272 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-d04f0c45dd.1864b179ee04d2c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-d04f0c45dd,UID:ci-3510.3.8-n-d04f0c45dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-d04f0c45dd,},FirstTimestamp:2025-09-13 00:54:51.82378874 +0000 UTC m=+0.396229353,LastTimestamp:2025-09-13 00:54:51.82378874 +0000 UTC m=+0.396229353,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-d04f0c45dd,}" Sep 13 00:54:53.814162 kubelet[2272]: I0913 00:54:53.814126 2272 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:53.814162 kubelet[2272]: E0913 00:54:53.814141 2272 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-d04f0c45dd\": node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:53.819177 kubelet[2272]: E0913 00:54:53.819136 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:53.864550 kubelet[2272]: E0913 00:54:53.864418 2272 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-d04f0c45dd.1864b179efdf797b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-d04f0c45dd,UID:ci-3510.3.8-n-d04f0c45dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-d04f0c45dd,},FirstTimestamp:2025-09-13 00:54:51.854895483 +0000 UTC m=+0.427336084,LastTimestamp:2025-09-13 00:54:51.854895483 +0000 UTC m=+0.427336084,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-d04f0c45dd,}" Sep 13 00:54:53.919539 kubelet[2272]: E0913 00:54:53.919471 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:53.920147 kubelet[2272]: E0913 00:54:53.919892 2272 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.8-n-d04f0c45dd.1864b179f0936514 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-d04f0c45dd,UID:ci-3510.3.8-n-d04f0c45dd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-3510.3.8-n-d04f0c45dd status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-d04f0c45dd,},FirstTimestamp:2025-09-13 00:54:51.86668674 +0000 UTC m=+0.439127329,LastTimestamp:2025-09-13 00:54:51.86668674 +0000 UTC m=+0.439127329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-d04f0c45dd,}" Sep 13 00:54:54.019809 kubelet[2272]: E0913 00:54:54.019775 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.120681 kubelet[2272]: E0913 00:54:54.120515 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.221408 kubelet[2272]: E0913 00:54:54.221308 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.322038 kubelet[2272]: E0913 00:54:54.321950 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.423239 
kubelet[2272]: E0913 00:54:54.423069 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.524241 kubelet[2272]: E0913 00:54:54.524149 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.625030 kubelet[2272]: E0913 00:54:54.624933 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.725886 kubelet[2272]: E0913 00:54:54.725721 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.826193 kubelet[2272]: E0913 00:54:54.826140 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:54.927177 kubelet[2272]: E0913 00:54:54.927126 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.028122 kubelet[2272]: E0913 00:54:55.028066 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.129286 kubelet[2272]: E0913 00:54:55.129222 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.229679 kubelet[2272]: E0913 00:54:55.229659 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.330727 kubelet[2272]: E0913 00:54:55.330578 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.431407 kubelet[2272]: E0913 00:54:55.431343 2272 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.532421 kubelet[2272]: E0913 00:54:55.532347 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.632872 kubelet[2272]: E0913 00:54:55.632693 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:55.733837 kubelet[2272]: E0913 00:54:55.733754 2272 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:56.392196 systemd[1]: Reloading. Sep 13 00:54:56.421712 /usr/lib/systemd/system-generators/torcx-generator[2602]: time="2025-09-13T00:54:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:56.421728 /usr/lib/systemd/system-generators/torcx-generator[2602]: time="2025-09-13T00:54:56Z" level=info msg="torcx already run" Sep 13 00:54:56.502300 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:56.502314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:56.517759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:56.577489 systemd[1]: Stopping kubelet.service... 
Sep 13 00:54:56.577636 kubelet[2272]: I0913 00:54:56.577512 2272 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:56.599811 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:56.599946 systemd[1]: Stopped kubelet.service. Sep 13 00:54:56.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:56.600888 systemd[1]: Starting kubelet.service... Sep 13 00:54:56.626735 kernel: kauditd_printk_skb: 42 callbacks suppressed Sep 13 00:54:56.626779 kernel: audit: type=1131 audit(1757724896.598:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:56.836382 systemd[1]: Started kubelet.service. Sep 13 00:54:56.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:56.855682 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:56.855682 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:54:56.855682 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:54:56.855945 kubelet[2677]: I0913 00:54:56.855715 2677 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:56.858946 kubelet[2677]: I0913 00:54:56.858935 2677 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:54:56.858946 kubelet[2677]: I0913 00:54:56.858946 2677 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:56.859100 kubelet[2677]: I0913 00:54:56.859068 2677 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:54:56.859800 kubelet[2677]: I0913 00:54:56.859769 2677 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:54:56.861170 kubelet[2677]: I0913 00:54:56.861151 2677 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:56.863014 kubelet[2677]: E0913 00:54:56.862998 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:56.863014 kubelet[2677]: I0913 00:54:56.863013 2677 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:54:56.901368 kernel: audit: type=1130 audit(1757724896.835:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:56.906522 kubelet[2677]: I0913 00:54:56.906512 2677 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:54:56.906738 kubelet[2677]: I0913 00:54:56.906732 2677 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:54:56.906794 kubelet[2677]: I0913 00:54:56.906781 2677 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:56.906895 kubelet[2677]: I0913 00:54:56.906795 2677 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-d04f0c45dd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:54:56.906952 kubelet[2677]: I0913 00:54:56.906901 2677 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:54:56.906952 kubelet[2677]: I0913 00:54:56.906907 2677 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:54:56.906952 kubelet[2677]: I0913 00:54:56.906924 2677 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:56.907012 kubelet[2677]: I0913 00:54:56.906972 2677 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:54:56.907012 kubelet[2677]: I0913 00:54:56.906979 2677 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:56.907012 kubelet[2677]: I0913 00:54:56.906993 2677 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:54:56.907012 kubelet[2677]: I0913 00:54:56.907000 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:56.907428 kubelet[2677]: I0913 00:54:56.907412 2677 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:56.907700 kubelet[2677]: I0913 00:54:56.907692 2677 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:56.908295 kubelet[2677]: I0913 00:54:56.908027 2677 server.go:1274] "Started kubelet" Sep 13 00:54:56.908540 kubelet[2677]: I0913 00:54:56.908494 2677 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:56.908583 kubelet[2677]: I0913 00:54:56.908517 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:56.908710 kubelet[2677]: I0913 00:54:56.908697 2677 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:56.908000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:56.909690 kubelet[2677]: I0913 00:54:56.909633 2677 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Sep 13 00:54:56.909690 kubelet[2677]: I0913 00:54:56.909659 2677 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Sep 13 00:54:56.909690 kubelet[2677]: I0913 00:54:56.909679 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:56.909784 kubelet[2677]: I0913 00:54:56.909696 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:56.909815 kubelet[2677]: I0913 00:54:56.909799 2677 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:54:56.909846 kubelet[2677]: E0913 00:54:56.909834 2677 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:56.909846 kubelet[2677]: E0913 00:54:56.909840 2677 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-d04f0c45dd\" not found" Sep 13 00:54:56.909908 kubelet[2677]: I0913 00:54:56.909859 2677 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:54:56.909945 kubelet[2677]: I0913 00:54:56.909937 2677 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:54:56.909994 kubelet[2677]: I0913 00:54:56.909988 2677 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:56.910188 kubelet[2677]: I0913 00:54:56.910175 2677 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:56.910251 kubelet[2677]: I0913 00:54:56.910236 2677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:56.910892 kubelet[2677]: I0913 00:54:56.910884 2677 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:54:56.914286 kubelet[2677]: I0913 00:54:56.914265 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:56.914838 kubelet[2677]: I0913 00:54:56.914821 2677 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:54:56.914873 kubelet[2677]: I0913 00:54:56.914844 2677 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:54:56.914873 kubelet[2677]: I0913 00:54:56.914864 2677 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:54:56.914945 kubelet[2677]: E0913 00:54:56.914911 2677 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:54:56.931415 kubelet[2677]: I0913 00:54:56.931357 2677 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:54:56.931415 kubelet[2677]: I0913 00:54:56.931394 2677 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:56.931518 kubelet[2677]: I0913 00:54:56.931423 2677 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:56.931518 kubelet[2677]: I0913 00:54:56.931510 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:54:56.931554 kubelet[2677]: I0913 00:54:56.931516 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:54:56.931554 kubelet[2677]: I0913 00:54:56.931528 2677 policy_none.go:49] "None policy: Start" Sep 13 00:54:56.931808 kubelet[2677]: I0913 00:54:56.931801 2677 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:54:56.931837 kubelet[2677]: I0913 00:54:56.931811 2677 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:56.931883 kubelet[2677]: I0913 00:54:56.931878 2677 state_mem.go:75] "Updated machine memory state" Sep 13 00:54:56.932514 kubelet[2677]: I0913 00:54:56.932483 2677 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:56.932514 kubelet[2677]: I0913 00:54:56.932510 2677 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Sep 13 00:54:56.932614 kubelet[2677]: I0913 00:54:56.932584 2677 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:56.932638 kubelet[2677]: I0913 00:54:56.932613 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:56.932757 kubelet[2677]: I0913 00:54:56.932749 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:56.908000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:57.006452 kernel: audit: type=1400 audit(1757724896.908:207): avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:57.006487 kernel: audit: type=1401 audit(1757724896.908:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:57.006502 kernel: audit: type=1300 audit(1757724896.908:207): arch=c000003e syscall=188 success=no exit=-22 a0=c0002745d0 a1=c000a8c498 a2=c0002745a0 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:56.908000 audit[2677]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002745d0 a1=c000a8c498 a2=c0002745a0 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:57.021052 kubelet[2677]: W0913 00:54:57.021039 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots] Sep 13 00:54:57.021117 kubelet[2677]: W0913 00:54:57.021039 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:57.021371 kubelet[2677]: W0913 00:54:57.021331 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:57.101071 kernel: audit: type=1327 audit(1757724896.908:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:56.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:57.102199 kubelet[2677]: I0913 00:54:57.102188 2677 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.107421 kubelet[2677]: I0913 00:54:57.107413 2677 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.107457 kubelet[2677]: I0913 00:54:57.107443 2677 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111260 kubelet[2677]: I0913 00:54:57.111247 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111302 kubelet[2677]: I0913 00:54:57.111273 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111302 kubelet[2677]: I0913 00:54:57.111288 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111429 kubelet[2677]: I0913 00:54:57.111416 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3eeafda6ff624b366915d6245bd6b3-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-d04f0c45dd\" (UID: \"8c3eeafda6ff624b366915d6245bd6b3\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111460 kubelet[2677]: I0913 00:54:57.111436 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111460 kubelet[2677]: I0913 00:54:57.111451 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111524 kubelet[2677]: I0913 00:54:57.111464 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/117c1493503282af7efed6bf6aeb2ab3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" (UID: \"117c1493503282af7efed6bf6aeb2ab3\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111524 kubelet[2677]: I0913 00:54:57.111476 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.111524 kubelet[2677]: I0913 00:54:57.111487 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc9d0822e7f30297f426b31a4471ac84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" (UID: \"fc9d0822e7f30297f426b31a4471ac84\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.192601 kernel: audit: type=1400 audit(1757724896.908:208): avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:56.908000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Sep 13 00:54:57.255663 kernel: audit: type=1401 audit(1757724896.908:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:56.908000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:57.287852 kernel: audit: type=1300 audit(1757724896.908:208): arch=c000003e syscall=188 success=no exit=-22 a0=c0003a56a0 a1=c000a8c4b0 a2=c000274660 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:56.908000 audit[2677]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0003a56a0 a1=c000a8c4b0 a2=c000274660 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:57.381564 kernel: audit: type=1327 audit(1757724896.908:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:56.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:56.931000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:54:56.931000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Sep 13 00:54:56.931000 audit[2677]: 
SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001719590 a1=c00108ba28 a2=c001719560 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:56.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Sep 13 00:54:57.907285 kubelet[2677]: I0913 00:54:57.907217 2677 apiserver.go:52] "Watching apiserver" Sep 13 00:54:57.910781 kubelet[2677]: I0913 00:54:57.910744 2677 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:54:57.922560 kubelet[2677]: W0913 00:54:57.922521 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:57.922560 kubelet[2677]: E0913 00:54:57.922554 2677 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-d04f0c45dd\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.922948 kubelet[2677]: W0913 00:54:57.922927 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:54:57.922974 kubelet[2677]: E0913 00:54:57.922949 2677 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-d04f0c45dd\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" Sep 13 00:54:57.930940 kubelet[2677]: I0913 00:54:57.930908 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-d04f0c45dd" podStartSLOduration=0.930900031 podStartE2EDuration="930.900031ms" podCreationTimestamp="2025-09-13 00:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:57.930748209 +0000 UTC m=+1.091866347" watchObservedRunningTime="2025-09-13 00:54:57.930900031 +0000 UTC m=+1.092018165" Sep 13 00:54:57.940625 kubelet[2677]: I0913 00:54:57.940556 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-d04f0c45dd" podStartSLOduration=0.940546586 podStartE2EDuration="940.546586ms" podCreationTimestamp="2025-09-13 00:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:57.935692308 +0000 UTC m=+1.096810446" watchObservedRunningTime="2025-09-13 00:54:57.940546586 +0000 UTC m=+1.101664725" Sep 13 00:54:57.940625 kubelet[2677]: I0913 00:54:57.940595 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-d04f0c45dd" podStartSLOduration=0.940591972 podStartE2EDuration="940.591972ms" podCreationTimestamp="2025-09-13 00:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:57.940591313 +0000 UTC m=+1.101709452" watchObservedRunningTime="2025-09-13 00:54:57.940591972 +0000 UTC m=+1.101710106" Sep 13 00:55:02.314055 kubelet[2677]: I0913 00:55:02.313993 2677 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:55:02.314886 env[1672]: time="2025-09-13T00:55:02.314650123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:55:02.315527 kubelet[2677]: I0913 00:55:02.315016 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:55:03.362356 kubelet[2677]: I0913 00:55:03.362287 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kw7\" (UniqueName: \"kubernetes.io/projected/7e7e14cc-868e-43cf-9b05-ede17a57647e-kube-api-access-r6kw7\") pod \"kube-proxy-mr7kx\" (UID: \"7e7e14cc-868e-43cf-9b05-ede17a57647e\") " pod="kube-system/kube-proxy-mr7kx" Sep 13 00:55:03.363192 kubelet[2677]: I0913 00:55:03.362400 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e7e14cc-868e-43cf-9b05-ede17a57647e-lib-modules\") pod \"kube-proxy-mr7kx\" (UID: \"7e7e14cc-868e-43cf-9b05-ede17a57647e\") " pod="kube-system/kube-proxy-mr7kx" Sep 13 00:55:03.363192 kubelet[2677]: I0913 00:55:03.362451 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e7e14cc-868e-43cf-9b05-ede17a57647e-kube-proxy\") pod \"kube-proxy-mr7kx\" (UID: \"7e7e14cc-868e-43cf-9b05-ede17a57647e\") " pod="kube-system/kube-proxy-mr7kx" Sep 13 00:55:03.363192 kubelet[2677]: I0913 00:55:03.362497 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e7e14cc-868e-43cf-9b05-ede17a57647e-xtables-lock\") pod \"kube-proxy-mr7kx\" (UID: \"7e7e14cc-868e-43cf-9b05-ede17a57647e\") " pod="kube-system/kube-proxy-mr7kx" Sep 13 00:55:03.462883 kubelet[2677]: I0913 00:55:03.462798 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2sc\" (UniqueName: \"kubernetes.io/projected/6ad17f17-e962-41cc-b10d-a15763809989-kube-api-access-bt2sc\") pod 
\"tigera-operator-58fc44c59b-t8jsg\" (UID: \"6ad17f17-e962-41cc-b10d-a15763809989\") " pod="tigera-operator/tigera-operator-58fc44c59b-t8jsg" Sep 13 00:55:03.463164 kubelet[2677]: I0913 00:55:03.462952 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ad17f17-e962-41cc-b10d-a15763809989-var-lib-calico\") pod \"tigera-operator-58fc44c59b-t8jsg\" (UID: \"6ad17f17-e962-41cc-b10d-a15763809989\") " pod="tigera-operator/tigera-operator-58fc44c59b-t8jsg" Sep 13 00:55:03.474941 kubelet[2677]: I0913 00:55:03.474881 2677 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:55:03.748008 env[1672]: time="2025-09-13T00:55:03.747792337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-t8jsg,Uid:6ad17f17-e962-41cc-b10d-a15763809989,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:55:03.769287 env[1672]: time="2025-09-13T00:55:03.769109899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:03.769287 env[1672]: time="2025-09-13T00:55:03.769215779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:03.769287 env[1672]: time="2025-09-13T00:55:03.769250970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:03.769814 env[1672]: time="2025-09-13T00:55:03.769639525Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eca34f9f7812b5df4c4084f81c0f6b2ee0cdd88b666c54bb42b27fa83108aa32 pid=2768 runtime=io.containerd.runc.v2 Sep 13 00:55:03.777323 env[1672]: time="2025-09-13T00:55:03.777197931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr7kx,Uid:7e7e14cc-868e-43cf-9b05-ede17a57647e,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:03.800808 env[1672]: time="2025-09-13T00:55:03.800671374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:03.800808 env[1672]: time="2025-09-13T00:55:03.800744480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:03.800808 env[1672]: time="2025-09-13T00:55:03.800771280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:03.801133 env[1672]: time="2025-09-13T00:55:03.801034314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8297f7e5ca7d866fa84e3eb38398e2934b54db98086d1b0702c9934aa58d6b0 pid=2795 runtime=io.containerd.runc.v2 Sep 13 00:55:03.828838 env[1672]: time="2025-09-13T00:55:03.828804642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr7kx,Uid:7e7e14cc-868e-43cf-9b05-ede17a57647e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8297f7e5ca7d866fa84e3eb38398e2934b54db98086d1b0702c9934aa58d6b0\"" Sep 13 00:55:03.830489 env[1672]: time="2025-09-13T00:55:03.830464970Z" level=info msg="CreateContainer within sandbox \"f8297f7e5ca7d866fa84e3eb38398e2934b54db98086d1b0702c9934aa58d6b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:55:03.836443 env[1672]: time="2025-09-13T00:55:03.836380095Z" level=info msg="CreateContainer within sandbox \"f8297f7e5ca7d866fa84e3eb38398e2934b54db98086d1b0702c9934aa58d6b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"715e9d5f4be7123f7c95b3503a5d3f34db61440dc2cfc7ff13240a39e5a6ef6b\"" Sep 13 00:55:03.836551 env[1672]: time="2025-09-13T00:55:03.836524021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-t8jsg,Uid:6ad17f17-e962-41cc-b10d-a15763809989,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eca34f9f7812b5df4c4084f81c0f6b2ee0cdd88b666c54bb42b27fa83108aa32\"" Sep 13 00:55:03.836728 env[1672]: time="2025-09-13T00:55:03.836705612Z" level=info msg="StartContainer for \"715e9d5f4be7123f7c95b3503a5d3f34db61440dc2cfc7ff13240a39e5a6ef6b\"" Sep 13 00:55:03.837536 env[1672]: time="2025-09-13T00:55:03.837518022Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:55:03.861234 env[1672]: time="2025-09-13T00:55:03.861206504Z" level=info msg="StartContainer for 
\"715e9d5f4be7123f7c95b3503a5d3f34db61440dc2cfc7ff13240a39e5a6ef6b\" returns successfully" Sep 13 00:55:03.952618 kubelet[2677]: I0913 00:55:03.952475 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mr7kx" podStartSLOduration=0.952425955 podStartE2EDuration="952.425955ms" podCreationTimestamp="2025-09-13 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:03.952391106 +0000 UTC m=+7.113509306" watchObservedRunningTime="2025-09-13 00:55:03.952425955 +0000 UTC m=+7.113544184" Sep 13 00:55:04.033000 audit[2920]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.073728 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 00:55:04.073821 kernel: audit: type=1325 audit(1757724904.033:210): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.033000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaf8e1cb0 a2=0 a3=7ffeaf8e1c9c items=0 ppid=2858 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.225385 kernel: audit: type=1300 audit(1757724904.033:210): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaf8e1cb0 a2=0 a3=7ffeaf8e1c9c items=0 ppid=2858 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.225434 kernel: audit: type=1327 audit(1757724904.033:210): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:55:04.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:55:04.282884 kernel: audit: type=1325 audit(1757724904.033:211): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.033000 audit[2921]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.033000 audit[2921]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff55354960 a2=0 a3=7fff5535494c items=0 ppid=2858 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.404522 update_engine[1662]: I0913 00:55:04.404464 1662 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:55:04.435507 kernel: audit: type=1300 audit(1757724904.033:211): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff55354960 a2=0 a3=7fff5535494c items=0 ppid=2858 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.435560 kernel: audit: type=1327 audit(1757724904.033:211): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:55:04.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Sep 13 00:55:04.037000 audit[2923]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.493372 kernel: audit: type=1325 audit(1757724904.037:212): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.037000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeb9d67d0 a2=0 a3=7fffeb9d67bc items=0 ppid=2858 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.646380 kernel: audit: type=1300 audit(1757724904.037:212): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeb9d67d0 a2=0 a3=7fffeb9d67bc items=0 ppid=2858 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.646435 kernel: audit: type=1327 audit(1757724904.037:212): 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:55:04.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:55:04.037000 audit[2922]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.761471 kernel: audit: type=1325 audit(1757724904.037:213): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.037000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb9700d80 a2=0 a3=eba4db82b47f2975 items=0 ppid=2858 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Sep 13 00:55:04.040000 audit[2925]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.040000 audit[2925]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff59976d70 a2=0 a3=7fff59976d5c items=0 ppid=2858 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:55:04.041000 audit[2924]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 
00:55:04.041000 audit[2924]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec9b7adb0 a2=0 a3=7ffec9b7ad9c items=0 ppid=2858 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Sep 13 00:55:04.136000 audit[2926]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.136000 audit[2926]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe8f2d0260 a2=0 a3=7ffe8f2d024c items=0 ppid=2858 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:55:04.138000 audit[2928]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.138000 audit[2928]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd634e17f0 a2=0 a3=7ffd634e17dc items=0 ppid=2858 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.138000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Sep 13 00:55:04.140000 audit[2931]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.140000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff669071c0 a2=0 a3=7fff669071ac items=0 ppid=2858 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Sep 13 00:55:04.140000 audit[2932]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.140000 audit[2932]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddb3d7d80 a2=0 a3=7ffddb3d7d6c items=0 ppid=2858 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:55:04.142000 audit[2934]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.142000 audit[2934]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffc6c47afa0 a2=0 a3=7ffc6c47af8c items=0 ppid=2858 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:55:04.142000 audit[2935]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.142000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdf746610 a2=0 a3=7fffdf7465fc items=0 ppid=2858 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.142000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:55:04.143000 audit[2937]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.143000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff16128260 a2=0 a3=7fff1612824c items=0 ppid=2858 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.143000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:55:04.145000 audit[2940]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.145000 audit[2940]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe4d0edc10 a2=0 a3=7ffe4d0edbfc items=0 ppid=2858 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Sep 13 00:55:04.146000 audit[2941]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.146000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff68bd2de0 a2=0 a3=7fff68bd2dcc items=0 ppid=2858 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:55:04.147000 audit[2943]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.147000 audit[2943]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fff1c1e2a50 a2=0 a3=7fff1c1e2a3c items=0 ppid=2858 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:55:04.148000 audit[2944]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.148000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe68f22e20 a2=0 a3=7ffe68f22e0c items=0 ppid=2858 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:55:04.761000 audit[2966]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.761000 audit[2966]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff747e5a10 a2=0 a3=7fff747e59fc items=0 ppid=2858 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.761000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:55:04.763000 audit[2969]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.763000 audit[2969]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1c5a0450 a2=0 a3=7ffd1c5a043c items=0 ppid=2858 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:55:04.765000 audit[2972]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.765000 audit[2972]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd832fa900 a2=0 a3=7ffd832fa8ec items=0 ppid=2858 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:55:04.766000 audit[2973]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.766000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe838e0310 a2=0 a3=7ffe838e02fc items=0 ppid=2858 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:55:04.767000 audit[2975]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.767000 audit[2975]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffaf0cdb50 a2=0 a3=7fffaf0cdb3c items=0 ppid=2858 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:55:04.769000 audit[2978]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.769000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5b4745f0 a2=0 a3=7ffc5b4745dc items=0 ppid=2858 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.769000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:55:04.769000 audit[2979]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.769000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5f148620 a2=0 a3=7ffe5f14860c items=0 ppid=2858 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:55:04.771000 audit[2981]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Sep 13 00:55:04.771000 audit[2981]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc8112eb70 a2=0 a3=7ffc8112eb5c items=0 ppid=2858 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:55:04.786000 audit[2987]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:04.786000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe167f7a00 a2=0 a3=7ffe167f79ec 
items=0 ppid=2858 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:04.832000 audit[2987]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:04.832000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe167f7a00 a2=0 a3=7ffe167f79ec items=0 ppid=2858 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:04.835000 audit[2992]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.835000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffce38f3300 a2=0 a3=7ffce38f32ec items=0 ppid=2858 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Sep 13 00:55:04.841000 audit[2994]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.841000 audit[2994]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffd440fe430 a2=0 a3=7ffd440fe41c items=0 ppid=2858 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Sep 13 00:55:04.850000 audit[2997]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.850000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff470ce990 a2=0 a3=7fff470ce97c items=0 ppid=2858 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Sep 13 00:55:04.853000 audit[2998]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.853000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc39e61af0 a2=0 a3=7ffc39e61adc items=0 ppid=2858 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.853000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Sep 13 00:55:04.858000 audit[3000]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.858000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8758dbf0 a2=0 a3=7fff8758dbdc items=0 ppid=2858 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Sep 13 00:55:04.861000 audit[3001]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.861000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdff5e4bc0 a2=0 a3=7ffdff5e4bac items=0 ppid=2858 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Sep 13 00:55:04.868000 audit[3003]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.868000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff1bd1d10 a2=0 a3=7ffff1bd1cfc items=0 ppid=2858 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Sep 13 00:55:04.876000 audit[3006]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.876000 audit[3006]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe01bb9830 a2=0 a3=7ffe01bb981c items=0 ppid=2858 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Sep 13 00:55:04.879000 audit[3007]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.879000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc06d14b50 a2=0 a3=7ffc06d14b3c items=0 ppid=2858 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Sep 13 00:55:04.885000 audit[3009]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.885000 audit[3009]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefe1baf10 a2=0 a3=7ffefe1baefc items=0 ppid=2858 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Sep 13 00:55:04.888000 audit[3010]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.888000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec8e03ae0 a2=0 a3=7ffec8e03acc items=0 ppid=2858 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Sep 13 00:55:04.894000 audit[3012]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.894000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe221ec440 a2=0 a3=7ffe221ec42c items=0 ppid=2858 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.894000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Sep 13 00:55:04.904000 audit[3015]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.904000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd28a5ffd0 a2=0 a3=7ffd28a5ffbc items=0 ppid=2858 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Sep 13 00:55:04.913000 audit[3018]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.913000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1a182ff0 a2=0 a3=7ffe1a182fdc items=0 ppid=2858 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Sep 13 00:55:04.916000 audit[3019]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.916000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff480e1e20 a2=0 a3=7fff480e1e0c items=0 ppid=2858 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Sep 13 00:55:04.922000 audit[3021]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.922000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffcd17a190 a2=0 a3=7fffcd17a17c items=0 ppid=2858 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:55:04.930000 audit[3024]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.930000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffed27c9650 a2=0 a3=7ffed27c963c items=0 ppid=2858 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.930000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Sep 13 00:55:04.933000 audit[3025]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.933000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff790873f0 a2=0 a3=7fff790873dc items=0 ppid=2858 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Sep 13 00:55:04.939000 audit[3027]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.939000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffeda627f40 a2=0 a3=7ffeda627f2c items=0 ppid=2858 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Sep 13 00:55:04.942000 audit[3028]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.942000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff79423fe0 a2=0 
a3=7fff79423fcc items=0 ppid=2858 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Sep 13 00:55:04.948000 audit[3030]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.948000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff7a9df30 a2=0 a3=7ffff7a9df1c items=0 ppid=2858 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:55:04.957000 audit[3033]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Sep 13 00:55:04.957000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0483f3e0 a2=0 a3=7ffd0483f3cc items=0 ppid=2858 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Sep 13 00:55:04.964000 audit[3035]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:55:04.964000 audit[3035]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffded85110 a2=0 a3=7fffded850fc items=0 ppid=2858 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.964000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:04.965000 audit[3035]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Sep 13 00:55:04.965000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffded85110 a2=0 a3=7fffded850fc items=0 ppid=2858 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:04.965000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:05.420182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176820208.mount: Deactivated successfully. 
Sep 13 00:55:06.007977 env[1672]: time="2025-09-13T00:55:06.007915940Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:06.008553 env[1672]: time="2025-09-13T00:55:06.008538795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:06.009185 env[1672]: time="2025-09-13T00:55:06.009171851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:06.009853 env[1672]: time="2025-09-13T00:55:06.009841017Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:06.010518 env[1672]: time="2025-09-13T00:55:06.010503640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:55:06.011587 env[1672]: time="2025-09-13T00:55:06.011573387Z" level=info msg="CreateContainer within sandbox \"eca34f9f7812b5df4c4084f81c0f6b2ee0cdd88b666c54bb42b27fa83108aa32\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:55:06.015719 env[1672]: time="2025-09-13T00:55:06.015704146Z" level=info msg="CreateContainer within sandbox \"eca34f9f7812b5df4c4084f81c0f6b2ee0cdd88b666c54bb42b27fa83108aa32\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5f79cc1d3788047c7ea369cded22e3910f3d068f076e1461b8698f13928c703a\"" Sep 13 00:55:06.015955 env[1672]: time="2025-09-13T00:55:06.015945718Z" level=info msg="StartContainer for 
\"5f79cc1d3788047c7ea369cded22e3910f3d068f076e1461b8698f13928c703a\"" Sep 13 00:55:06.053911 env[1672]: time="2025-09-13T00:55:06.053885252Z" level=info msg="StartContainer for \"5f79cc1d3788047c7ea369cded22e3910f3d068f076e1461b8698f13928c703a\" returns successfully" Sep 13 00:55:08.240529 kubelet[2677]: I0913 00:55:08.238122 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-t8jsg" podStartSLOduration=3.064161504 podStartE2EDuration="5.238068293s" podCreationTimestamp="2025-09-13 00:55:03 +0000 UTC" firstStartedPulling="2025-09-13 00:55:03.837077814 +0000 UTC m=+6.998195952" lastFinishedPulling="2025-09-13 00:55:06.010984606 +0000 UTC m=+9.172102741" observedRunningTime="2025-09-13 00:55:06.955603832 +0000 UTC m=+10.116721972" watchObservedRunningTime="2025-09-13 00:55:08.238068293 +0000 UTC m=+11.399186514" Sep 13 00:55:10.598400 sudo[1930]: pam_unix(sudo:session): session closed for user root Sep 13 00:55:10.597000 audit[1930]: USER_END pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.599450 sshd[1925]: pam_unix(sshd:session): session closed for user core Sep 13 00:55:10.600999 systemd[1]: sshd@8-147.75.203.133:22-139.178.89.65:50220.service: Deactivated successfully. Sep 13 00:55:10.601688 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:55:10.601698 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:55:10.602187 systemd-logind[1709]: Removed session 11. 
Sep 13 00:55:10.624724 kernel: kauditd_printk_skb: 143 callbacks suppressed Sep 13 00:55:10.624795 kernel: audit: type=1106 audit(1757724910.597:261): pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.597000 audit[1930]: CRED_DISP pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.798326 kernel: audit: type=1104 audit(1757724910.597:262): pid=1930 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.798443 kernel: audit: type=1106 audit(1757724910.598:263): pid=1925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:55:10.598000 audit[1925]: USER_END pid=1925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:55:10.598000 audit[1925]: CRED_DISP pid=1925 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:55:10.981525 kernel: audit: type=1104 audit(1757724910.598:264): pid=1925 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 00:55:10.981631 kernel: audit: type=1131 audit(1757724910.599:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.203.133:22-139.178.89.65:50220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.203.133:22-139.178.89.65:50220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:10.950000 audit[3202]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.128601 kernel: audit: type=1325 audit(1757724910.950:266): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.128692 kernel: audit: type=1300 audit(1757724910.950:266): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9a947b40 a2=0 a3=7ffe9a947b2c items=0 ppid=2858 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:10.950000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9a947b40 a2=0 a3=7ffe9a947b2c items=0 ppid=2858 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:10.950000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.284453 kernel: audit: type=1327 audit(1757724910.950:266): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.288000 audit[3202]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.288000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9a947b40 a2=0 a3=0 items=0 ppid=2858 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.446651 kernel: audit: type=1325 audit(1757724911.288:267): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.446743 kernel: audit: type=1300 audit(1757724911.288:267): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9a947b40 a2=0 a3=0 items=0 ppid=2858 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.449000 audit[3205]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.449000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd48a57040 a2=0 a3=7ffd48a5702c items=0 ppid=2858 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:11.466000 audit[3205]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:11.466000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd48a57040 a2=0 a3=0 items=0 ppid=2858 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:11.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:12.694000 audit[3207]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:12.694000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd6ffd2de0 a2=0 a3=7ffd6ffd2dcc items=0 ppid=2858 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:12.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:12.708000 audit[3207]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:12.708000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6ffd2de0 a2=0 a3=0 items=0 ppid=2858 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:12.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:12.961458 kubelet[2677]: I0913 00:55:12.961258 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twb27\" (UniqueName: \"kubernetes.io/projected/b245007a-3026-4155-9d57-2ae2769d8a56-kube-api-access-twb27\") pod \"calico-typha-f5bf5c4c9-np8cq\" (UID: \"b245007a-3026-4155-9d57-2ae2769d8a56\") " pod="calico-system/calico-typha-f5bf5c4c9-np8cq" Sep 13 00:55:12.961458 kubelet[2677]: I0913 00:55:12.961385 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b245007a-3026-4155-9d57-2ae2769d8a56-tigera-ca-bundle\") pod \"calico-typha-f5bf5c4c9-np8cq\" (UID: \"b245007a-3026-4155-9d57-2ae2769d8a56\") " pod="calico-system/calico-typha-f5bf5c4c9-np8cq" Sep 13 00:55:12.961458 kubelet[2677]: I0913 00:55:12.961440 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b245007a-3026-4155-9d57-2ae2769d8a56-typha-certs\") pod \"calico-typha-f5bf5c4c9-np8cq\" (UID: \"b245007a-3026-4155-9d57-2ae2769d8a56\") " pod="calico-system/calico-typha-f5bf5c4c9-np8cq" Sep 13 00:55:13.148571 env[1672]: time="2025-09-13T00:55:13.148434409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5bf5c4c9-np8cq,Uid:b245007a-3026-4155-9d57-2ae2769d8a56,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:13.171260 env[1672]: time="2025-09-13T00:55:13.171123791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:13.171260 env[1672]: time="2025-09-13T00:55:13.171213354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:13.171260 env[1672]: time="2025-09-13T00:55:13.171250279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:13.171781 env[1672]: time="2025-09-13T00:55:13.171699188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/623e304dae3f09020867a4d730bd8e514fb1a235a8628a8e3a3c073aed939960 pid=3217 runtime=io.containerd.runc.v2 Sep 13 00:55:13.240745 env[1672]: time="2025-09-13T00:55:13.240678891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5bf5c4c9-np8cq,Uid:b245007a-3026-4155-9d57-2ae2769d8a56,Namespace:calico-system,Attempt:0,} returns sandbox id \"623e304dae3f09020867a4d730bd8e514fb1a235a8628a8e3a3c073aed939960\"" Sep 13 00:55:13.241477 env[1672]: time="2025-09-13T00:55:13.241464861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:55:13.364885 kubelet[2677]: I0913 00:55:13.364806 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-cni-bin-dir\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365193 kubelet[2677]: I0913 00:55:13.364900 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-cni-log-dir\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 
00:55:13.365193 kubelet[2677]: I0913 00:55:13.365007 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-node-certs\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365193 kubelet[2677]: I0913 00:55:13.365090 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-policysync\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365193 kubelet[2677]: I0913 00:55:13.365147 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-var-lib-calico\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365193 kubelet[2677]: I0913 00:55:13.365189 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-var-run-calico\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365804 kubelet[2677]: I0913 00:55:13.365299 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-flexvol-driver-host\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365804 kubelet[2677]: I0913 00:55:13.365401 2677 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwsw\" (UniqueName: \"kubernetes.io/projected/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-kube-api-access-9cwsw\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365804 kubelet[2677]: I0913 00:55:13.365451 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-xtables-lock\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365804 kubelet[2677]: I0913 00:55:13.365501 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-cni-net-dir\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.365804 kubelet[2677]: I0913 00:55:13.365565 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-lib-modules\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.366280 kubelet[2677]: I0913 00:55:13.365618 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b87e55dc-d0b7-42e6-9eaf-11c5241cd917-tigera-ca-bundle\") pod \"calico-node-5fz8w\" (UID: \"b87e55dc-d0b7-42e6-9eaf-11c5241cd917\") " pod="calico-system/calico-node-5fz8w" Sep 13 00:55:13.468880 kubelet[2677]: E0913 00:55:13.468823 2677 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.468880 kubelet[2677]: W0913 00:55:13.468870 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.469237 kubelet[2677]: E0913 00:55:13.468911 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.474919 kubelet[2677]: E0913 00:55:13.474832 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.474919 kubelet[2677]: W0913 00:55:13.474871 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.474919 kubelet[2677]: E0913 00:55:13.474913 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.488089 kubelet[2677]: E0913 00:55:13.488036 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.488089 kubelet[2677]: W0913 00:55:13.488085 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.488623 kubelet[2677]: E0913 00:55:13.488143 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.495838 kubelet[2677]: E0913 00:55:13.495620 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:13.513467 env[1672]: time="2025-09-13T00:55:13.513344896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fz8w,Uid:b87e55dc-d0b7-42e6-9eaf-11c5241cd917,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:13.536716 env[1672]: time="2025-09-13T00:55:13.536539357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:13.536716 env[1672]: time="2025-09-13T00:55:13.536663663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:13.536716 env[1672]: time="2025-09-13T00:55:13.536702716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:13.537184 env[1672]: time="2025-09-13T00:55:13.537053738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883 pid=3274 runtime=io.containerd.runc.v2 Sep 13 00:55:13.565995 kubelet[2677]: E0913 00:55:13.565951 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.565995 kubelet[2677]: W0913 00:55:13.565987 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.566296 kubelet[2677]: E0913 00:55:13.566023 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.566488 kubelet[2677]: E0913 00:55:13.566432 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.566488 kubelet[2677]: W0913 00:55:13.566450 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.566488 kubelet[2677]: E0913 00:55:13.566469 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.566896 kubelet[2677]: E0913 00:55:13.566836 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.566896 kubelet[2677]: W0913 00:55:13.566860 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.566896 kubelet[2677]: E0913 00:55:13.566883 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.567272 kubelet[2677]: E0913 00:55:13.567249 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.567272 kubelet[2677]: W0913 00:55:13.567268 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.567480 kubelet[2677]: E0913 00:55:13.567290 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.567765 kubelet[2677]: E0913 00:55:13.567704 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.567765 kubelet[2677]: W0913 00:55:13.567728 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.567765 kubelet[2677]: E0913 00:55:13.567750 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.568138 kubelet[2677]: E0913 00:55:13.568095 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.568138 kubelet[2677]: W0913 00:55:13.568119 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.568302 kubelet[2677]: E0913 00:55:13.568142 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.568512 kubelet[2677]: E0913 00:55:13.568494 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.568613 kubelet[2677]: W0913 00:55:13.568516 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.568613 kubelet[2677]: E0913 00:55:13.568542 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.568965 kubelet[2677]: E0913 00:55:13.568941 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.569105 kubelet[2677]: W0913 00:55:13.568966 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.569105 kubelet[2677]: E0913 00:55:13.568994 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.569403 kubelet[2677]: E0913 00:55:13.569387 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.569466 kubelet[2677]: W0913 00:55:13.569403 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.569466 kubelet[2677]: E0913 00:55:13.569419 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.569657 kubelet[2677]: E0913 00:55:13.569643 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.569717 kubelet[2677]: W0913 00:55:13.569661 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.569717 kubelet[2677]: E0913 00:55:13.569681 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.569977 kubelet[2677]: E0913 00:55:13.569962 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.570037 kubelet[2677]: W0913 00:55:13.569980 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.570037 kubelet[2677]: E0913 00:55:13.570000 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.570262 kubelet[2677]: E0913 00:55:13.570247 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.570317 kubelet[2677]: W0913 00:55:13.570264 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.570317 kubelet[2677]: E0913 00:55:13.570282 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.570568 kubelet[2677]: E0913 00:55:13.570555 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.570629 kubelet[2677]: W0913 00:55:13.570572 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.570629 kubelet[2677]: E0913 00:55:13.570592 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.570870 kubelet[2677]: E0913 00:55:13.570842 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.570870 kubelet[2677]: W0913 00:55:13.570863 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.571070 kubelet[2677]: E0913 00:55:13.570882 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.571143 kubelet[2677]: E0913 00:55:13.571126 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.571213 kubelet[2677]: W0913 00:55:13.571141 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.571213 kubelet[2677]: E0913 00:55:13.571159 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.571419 kubelet[2677]: E0913 00:55:13.571396 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.571419 kubelet[2677]: W0913 00:55:13.571417 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.571572 kubelet[2677]: E0913 00:55:13.571437 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.571724 kubelet[2677]: E0913 00:55:13.571707 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.571724 kubelet[2677]: W0913 00:55:13.571723 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.571858 kubelet[2677]: E0913 00:55:13.571737 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.572014 kubelet[2677]: E0913 00:55:13.571994 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.572081 kubelet[2677]: W0913 00:55:13.572014 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.572081 kubelet[2677]: E0913 00:55:13.572031 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.572263 kubelet[2677]: E0913 00:55:13.572249 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.572336 kubelet[2677]: W0913 00:55:13.572264 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.572336 kubelet[2677]: E0913 00:55:13.572280 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.572540 kubelet[2677]: E0913 00:55:13.572522 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.572605 kubelet[2677]: W0913 00:55:13.572543 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.572605 kubelet[2677]: E0913 00:55:13.572567 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.573005 kubelet[2677]: E0913 00:55:13.572989 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.573071 kubelet[2677]: W0913 00:55:13.573009 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.573071 kubelet[2677]: E0913 00:55:13.573031 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.573186 kubelet[2677]: I0913 00:55:13.573074 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76f0a7cf-aca7-4535-904d-665ae5104c51-kubelet-dir\") pod \"csi-node-driver-rzrs8\" (UID: \"76f0a7cf-aca7-4535-904d-665ae5104c51\") " pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:13.573348 kubelet[2677]: E0913 00:55:13.573330 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.573431 kubelet[2677]: W0913 00:55:13.573352 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.573431 kubelet[2677]: E0913 00:55:13.573385 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.573543 kubelet[2677]: I0913 00:55:13.573422 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/76f0a7cf-aca7-4535-904d-665ae5104c51-varrun\") pod \"csi-node-driver-rzrs8\" (UID: \"76f0a7cf-aca7-4535-904d-665ae5104c51\") " pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:13.573738 kubelet[2677]: E0913 00:55:13.573719 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.573799 kubelet[2677]: W0913 00:55:13.573741 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.573799 kubelet[2677]: E0913 00:55:13.573766 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.574025 kubelet[2677]: E0913 00:55:13.574010 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.574097 kubelet[2677]: W0913 00:55:13.574025 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.574097 kubelet[2677]: E0913 00:55:13.574046 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.574269 kubelet[2677]: E0913 00:55:13.574259 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.574318 kubelet[2677]: W0913 00:55:13.574269 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.574318 kubelet[2677]: E0913 00:55:13.574284 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.574318 kubelet[2677]: I0913 00:55:13.574312 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76f0a7cf-aca7-4535-904d-665ae5104c51-socket-dir\") pod \"csi-node-driver-rzrs8\" (UID: \"76f0a7cf-aca7-4535-904d-665ae5104c51\") " pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:13.574534 kubelet[2677]: E0913 00:55:13.574523 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.574534 kubelet[2677]: W0913 00:55:13.574534 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.574616 kubelet[2677]: E0913 00:55:13.574549 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.574616 kubelet[2677]: I0913 00:55:13.574568 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76f0a7cf-aca7-4535-904d-665ae5104c51-registration-dir\") pod \"csi-node-driver-rzrs8\" (UID: \"76f0a7cf-aca7-4535-904d-665ae5104c51\") " pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:13.574786 kubelet[2677]: E0913 00:55:13.574770 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.574831 kubelet[2677]: W0913 00:55:13.574787 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.574831 kubelet[2677]: E0913 00:55:13.574807 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.575011 kubelet[2677]: E0913 00:55:13.575001 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.575055 kubelet[2677]: W0913 00:55:13.575012 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.575055 kubelet[2677]: E0913 00:55:13.575027 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.575203 kubelet[2677]: E0913 00:55:13.575193 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.575252 kubelet[2677]: W0913 00:55:13.575203 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.575252 kubelet[2677]: E0913 00:55:13.575216 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.575391 kubelet[2677]: E0913 00:55:13.575381 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.575391 kubelet[2677]: W0913 00:55:13.575391 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.575481 kubelet[2677]: E0913 00:55:13.575403 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.575638 kubelet[2677]: E0913 00:55:13.575622 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.575690 kubelet[2677]: W0913 00:55:13.575642 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.575690 kubelet[2677]: E0913 00:55:13.575667 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.575771 kubelet[2677]: I0913 00:55:13.575704 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb997\" (UniqueName: \"kubernetes.io/projected/76f0a7cf-aca7-4535-904d-665ae5104c51-kube-api-access-jb997\") pod \"csi-node-driver-rzrs8\" (UID: \"76f0a7cf-aca7-4535-904d-665ae5104c51\") " pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:13.575924 kubelet[2677]: E0913 00:55:13.575913 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.575969 kubelet[2677]: W0913 00:55:13.575924 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.575969 kubelet[2677]: E0913 00:55:13.575937 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.576089 kubelet[2677]: E0913 00:55:13.576080 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.576131 kubelet[2677]: W0913 00:55:13.576090 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.576131 kubelet[2677]: E0913 00:55:13.576102 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.576265 kubelet[2677]: E0913 00:55:13.576255 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.576317 kubelet[2677]: W0913 00:55:13.576265 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.576317 kubelet[2677]: E0913 00:55:13.576275 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.576435 kubelet[2677]: E0913 00:55:13.576425 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.576435 kubelet[2677]: W0913 00:55:13.576434 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.576541 kubelet[2677]: E0913 00:55:13.576444 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.585785 env[1672]: time="2025-09-13T00:55:13.585740907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fz8w,Uid:b87e55dc-d0b7-42e6-9eaf-11c5241cd917,Namespace:calico-system,Attempt:0,} returns sandbox id \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\"" Sep 13 00:55:13.677328 kubelet[2677]: E0913 00:55:13.677231 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.677328 kubelet[2677]: W0913 00:55:13.677274 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.677328 kubelet[2677]: E0913 00:55:13.677315 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.678020 kubelet[2677]: E0913 00:55:13.677941 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.678020 kubelet[2677]: W0913 00:55:13.677975 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.678020 kubelet[2677]: E0913 00:55:13.678013 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.678640 kubelet[2677]: E0913 00:55:13.678559 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.678640 kubelet[2677]: W0913 00:55:13.678600 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.678954 kubelet[2677]: E0913 00:55:13.678642 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.679275 kubelet[2677]: E0913 00:55:13.679215 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.679275 kubelet[2677]: W0913 00:55:13.679250 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.679643 kubelet[2677]: E0913 00:55:13.679296 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.679895 kubelet[2677]: E0913 00:55:13.679814 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.679895 kubelet[2677]: W0913 00:55:13.679849 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.680190 kubelet[2677]: E0913 00:55:13.679914 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.680387 kubelet[2677]: E0913 00:55:13.680335 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.680496 kubelet[2677]: W0913 00:55:13.680385 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.680628 kubelet[2677]: E0913 00:55:13.680491 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.680992 kubelet[2677]: E0913 00:55:13.680913 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.680992 kubelet[2677]: W0913 00:55:13.680948 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.681263 kubelet[2677]: E0913 00:55:13.681037 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.681522 kubelet[2677]: E0913 00:55:13.681470 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.681522 kubelet[2677]: W0913 00:55:13.681495 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.681760 kubelet[2677]: E0913 00:55:13.681528 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.682032 kubelet[2677]: E0913 00:55:13.681982 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.682032 kubelet[2677]: W0913 00:55:13.682009 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.682251 kubelet[2677]: E0913 00:55:13.682043 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.682637 kubelet[2677]: E0913 00:55:13.682605 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.682749 kubelet[2677]: W0913 00:55:13.682641 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.682749 kubelet[2677]: E0913 00:55:13.682684 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.683307 kubelet[2677]: E0913 00:55:13.683246 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.683307 kubelet[2677]: W0913 00:55:13.683287 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.683582 kubelet[2677]: E0913 00:55:13.683331 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.683843 kubelet[2677]: E0913 00:55:13.683790 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.683843 kubelet[2677]: W0913 00:55:13.683818 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.684047 kubelet[2677]: E0913 00:55:13.683926 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.684241 kubelet[2677]: E0913 00:55:13.684216 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.684373 kubelet[2677]: W0913 00:55:13.684242 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.684373 kubelet[2677]: E0913 00:55:13.684324 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.684770 kubelet[2677]: E0913 00:55:13.684717 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.684770 kubelet[2677]: W0913 00:55:13.684743 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.684980 kubelet[2677]: E0913 00:55:13.684855 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.685219 kubelet[2677]: E0913 00:55:13.685188 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.685333 kubelet[2677]: W0913 00:55:13.685223 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.685333 kubelet[2677]: E0913 00:55:13.685298 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.685880 kubelet[2677]: E0913 00:55:13.685826 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.685880 kubelet[2677]: W0913 00:55:13.685856 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.686107 kubelet[2677]: E0913 00:55:13.685974 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.686316 kubelet[2677]: E0913 00:55:13.686287 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.686459 kubelet[2677]: W0913 00:55:13.686321 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.686459 kubelet[2677]: E0913 00:55:13.686416 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.686932 kubelet[2677]: E0913 00:55:13.686864 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.686932 kubelet[2677]: W0913 00:55:13.686898 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.687224 kubelet[2677]: E0913 00:55:13.686971 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.687473 kubelet[2677]: E0913 00:55:13.687413 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.687473 kubelet[2677]: W0913 00:55:13.687443 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.687804 kubelet[2677]: E0913 00:55:13.687503 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.687902 kubelet[2677]: E0913 00:55:13.687833 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.687902 kubelet[2677]: W0913 00:55:13.687858 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.688088 kubelet[2677]: E0913 00:55:13.687918 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.688316 kubelet[2677]: E0913 00:55:13.688286 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.688453 kubelet[2677]: W0913 00:55:13.688320 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.688453 kubelet[2677]: E0913 00:55:13.688399 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.688931 kubelet[2677]: E0913 00:55:13.688879 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.688931 kubelet[2677]: W0913 00:55:13.688904 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.689145 kubelet[2677]: E0913 00:55:13.688939 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.689444 kubelet[2677]: E0913 00:55:13.689395 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.689444 kubelet[2677]: W0913 00:55:13.689420 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.689672 kubelet[2677]: E0913 00:55:13.689452 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.689884 kubelet[2677]: E0913 00:55:13.689859 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.689985 kubelet[2677]: W0913 00:55:13.689885 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.689985 kubelet[2677]: E0913 00:55:13.689910 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.690562 kubelet[2677]: E0913 00:55:13.690532 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.690562 kubelet[2677]: W0913 00:55:13.690560 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.690795 kubelet[2677]: E0913 00:55:13.690586 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:13.705238 kubelet[2677]: E0913 00:55:13.705155 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:13.705238 kubelet[2677]: W0913 00:55:13.705190 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:13.705238 kubelet[2677]: E0913 00:55:13.705223 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:13.729000 audit[3370]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:13.729000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd0cecf8e0 a2=0 a3=7ffd0cecf8cc items=0 ppid=2858 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:13.742000 audit[3370]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:13.742000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0cecf8e0 a2=0 a3=0 items=0 ppid=2858 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:13.742000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:14.916020 kubelet[2677]: E0913 00:55:14.915933 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:15.223727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155530373.mount: Deactivated successfully. Sep 13 00:55:16.084610 env[1672]: time="2025-09-13T00:55:16.084558449Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:16.085191 env[1672]: time="2025-09-13T00:55:16.085145751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:16.085729 env[1672]: time="2025-09-13T00:55:16.085685596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:16.086317 env[1672]: time="2025-09-13T00:55:16.086275769Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:16.086609 env[1672]: time="2025-09-13T00:55:16.086565769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:55:16.087111 env[1672]: 
time="2025-09-13T00:55:16.087070609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:55:16.090634 env[1672]: time="2025-09-13T00:55:16.090617697Z" level=info msg="CreateContainer within sandbox \"623e304dae3f09020867a4d730bd8e514fb1a235a8628a8e3a3c073aed939960\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:55:16.094288 env[1672]: time="2025-09-13T00:55:16.094246058Z" level=info msg="CreateContainer within sandbox \"623e304dae3f09020867a4d730bd8e514fb1a235a8628a8e3a3c073aed939960\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f03356b23d6f3a858768e593a0f2558d57c1ee1e030c70f2f568dfceef4c38ba\"" Sep 13 00:55:16.094442 env[1672]: time="2025-09-13T00:55:16.094427079Z" level=info msg="StartContainer for \"f03356b23d6f3a858768e593a0f2558d57c1ee1e030c70f2f568dfceef4c38ba\"" Sep 13 00:55:16.135435 env[1672]: time="2025-09-13T00:55:16.135375540Z" level=info msg="StartContainer for \"f03356b23d6f3a858768e593a0f2558d57c1ee1e030c70f2f568dfceef4c38ba\" returns successfully" Sep 13 00:55:16.917020 kubelet[2677]: E0913 00:55:16.916953 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:16.988886 kubelet[2677]: I0913 00:55:16.988773 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f5bf5c4c9-np8cq" podStartSLOduration=2.14301055 podStartE2EDuration="4.988725146s" podCreationTimestamp="2025-09-13 00:55:12 +0000 UTC" firstStartedPulling="2025-09-13 00:55:13.241313909 +0000 UTC m=+16.402432043" lastFinishedPulling="2025-09-13 00:55:16.087028506 +0000 UTC m=+19.248146639" observedRunningTime="2025-09-13 00:55:16.988351771 +0000 UTC m=+20.149469990" 
watchObservedRunningTime="2025-09-13 00:55:16.988725146 +0000 UTC m=+20.149843326" Sep 13 00:55:17.003029 kubelet[2677]: E0913 00:55:17.002934 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.003029 kubelet[2677]: W0913 00:55:17.002979 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.003029 kubelet[2677]: E0913 00:55:17.003019 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.003729 kubelet[2677]: E0913 00:55:17.003640 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.003729 kubelet[2677]: W0913 00:55:17.003675 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.003729 kubelet[2677]: E0913 00:55:17.003709 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.004293 kubelet[2677]: E0913 00:55:17.004239 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.004293 kubelet[2677]: W0913 00:55:17.004266 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.004293 kubelet[2677]: E0913 00:55:17.004295 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.004834 kubelet[2677]: E0913 00:55:17.004764 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.004834 kubelet[2677]: W0913 00:55:17.004798 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.004834 kubelet[2677]: E0913 00:55:17.004831 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.005318 kubelet[2677]: E0913 00:55:17.005289 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.005318 kubelet[2677]: W0913 00:55:17.005316 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.005626 kubelet[2677]: E0913 00:55:17.005344 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.005783 kubelet[2677]: E0913 00:55:17.005753 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.005783 kubelet[2677]: W0913 00:55:17.005778 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.006049 kubelet[2677]: E0913 00:55:17.005804 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.006183 kubelet[2677]: E0913 00:55:17.006141 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.006183 kubelet[2677]: W0913 00:55:17.006162 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.006183 kubelet[2677]: E0913 00:55:17.006184 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.006569 kubelet[2677]: E0913 00:55:17.006541 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.006569 kubelet[2677]: W0913 00:55:17.006562 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.006802 kubelet[2677]: E0913 00:55:17.006583 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.007026 kubelet[2677]: E0913 00:55:17.007000 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.007026 kubelet[2677]: W0913 00:55:17.007021 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.007270 kubelet[2677]: E0913 00:55:17.007043 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.007420 kubelet[2677]: E0913 00:55:17.007378 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.007420 kubelet[2677]: W0913 00:55:17.007398 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.007420 kubelet[2677]: E0913 00:55:17.007419 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.007812 kubelet[2677]: E0913 00:55:17.007784 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.007812 kubelet[2677]: W0913 00:55:17.007805 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.008086 kubelet[2677]: E0913 00:55:17.007828 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.008227 kubelet[2677]: E0913 00:55:17.008163 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.008227 kubelet[2677]: W0913 00:55:17.008182 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.008227 kubelet[2677]: E0913 00:55:17.008206 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:55:17.008628 kubelet[2677]: E0913 00:55:17.008595 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.008628 kubelet[2677]: W0913 00:55:17.008615 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.008822 kubelet[2677]: E0913 00:55:17.008638 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.009048 kubelet[2677]: E0913 00:55:17.009018 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.009048 kubelet[2677]: W0913 00:55:17.009043 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.009283 kubelet[2677]: E0913 00:55:17.009067 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[... same FlexVolume probe error triple (driver-call.go:262, driver-call.go:149, plugins.go:691) repeated, timestamps 00:55:17.009 to 00:55:17.036 elided ...]
Error: unexpected end of JSON input" Sep 13 00:55:17.036719 kubelet[2677]: E0913 00:55:17.036693 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:55:17.036848 kubelet[2677]: W0913 00:55:17.036720 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:55:17.036848 kubelet[2677]: E0913 00:55:17.036750 2677 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:55:17.426411 env[1672]: time="2025-09-13T00:55:17.426344501Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:17.427610 env[1672]: time="2025-09-13T00:55:17.427571275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:17.429380 env[1672]: time="2025-09-13T00:55:17.429340738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:17.430874 env[1672]: time="2025-09-13T00:55:17.430842157Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:17.431585 env[1672]: time="2025-09-13T00:55:17.431553120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image 
reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:55:17.433758 env[1672]: time="2025-09-13T00:55:17.433698037Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:55:17.440412 env[1672]: time="2025-09-13T00:55:17.440342299Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb\"" Sep 13 00:55:17.440680 env[1672]: time="2025-09-13T00:55:17.440654407Z" level=info msg="StartContainer for \"57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb\"" Sep 13 00:55:17.480751 env[1672]: time="2025-09-13T00:55:17.480712933Z" level=info msg="StartContainer for \"57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb\" returns successfully" Sep 13 00:55:17.506003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb-rootfs.mount: Deactivated successfully. 
Sep 13 00:55:17.973506 kubelet[2677]: I0913 00:55:17.973456 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:18.457059 env[1672]: time="2025-09-13T00:55:18.456942581Z" level=info msg="shim disconnected" id=57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb Sep 13 00:55:18.457059 env[1672]: time="2025-09-13T00:55:18.457050404Z" level=warning msg="cleaning up after shim disconnected" id=57205716c450d3a062b2596a5620f42b357017f470f0fbe9909cb8280ef5d6fb namespace=k8s.io Sep 13 00:55:18.458282 env[1672]: time="2025-09-13T00:55:18.457078921Z" level=info msg="cleaning up dead shim" Sep 13 00:55:18.472874 env[1672]: time="2025-09-13T00:55:18.472799006Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3516 runtime=io.containerd.runc.v2\n" Sep 13 00:55:18.916059 kubelet[2677]: E0913 00:55:18.915948 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:18.981061 env[1672]: time="2025-09-13T00:55:18.980956066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:55:20.916515 kubelet[2677]: E0913 00:55:20.916435 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:22.725503 env[1672]: time="2025-09-13T00:55:22.725454204Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:55:22.726168 env[1672]: time="2025-09-13T00:55:22.726123552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:22.726792 env[1672]: time="2025-09-13T00:55:22.726708507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:22.727570 env[1672]: time="2025-09-13T00:55:22.727529122Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:22.727905 env[1672]: time="2025-09-13T00:55:22.727863358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:55:22.729087 env[1672]: time="2025-09-13T00:55:22.729071837Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:55:22.733538 env[1672]: time="2025-09-13T00:55:22.733495457Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61\"" Sep 13 00:55:22.733804 env[1672]: time="2025-09-13T00:55:22.733755189Z" level=info msg="StartContainer for \"dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61\"" Sep 13 00:55:22.756722 env[1672]: time="2025-09-13T00:55:22.756666567Z" level=info msg="StartContainer for 
\"dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61\" returns successfully" Sep 13 00:55:22.915741 kubelet[2677]: E0913 00:55:22.915632 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:23.691753 env[1672]: time="2025-09-13T00:55:23.691628375Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:55:23.739567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61-rootfs.mount: Deactivated successfully. Sep 13 00:55:23.749005 kubelet[2677]: I0913 00:55:23.748954 2677 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:55:23.879113 kubelet[2677]: I0913 00:55:23.879008 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw2kj\" (UniqueName: \"kubernetes.io/projected/1ce64396-8b92-4683-bf8f-d8bcb3fc6a06-kube-api-access-xw2kj\") pod \"coredns-7c65d6cfc9-ht5gv\" (UID: \"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06\") " pod="kube-system/coredns-7c65d6cfc9-ht5gv" Sep 13 00:55:23.879113 kubelet[2677]: I0913 00:55:23.879099 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75fdc49e-31b1-401f-8cb1-69f2cb356414-calico-apiserver-certs\") pod \"calico-apiserver-77cc844975-jtt5x\" (UID: \"75fdc49e-31b1-401f-8cb1-69f2cb356414\") " pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" Sep 13 
00:55:23.879678 kubelet[2677]: I0913 00:55:23.879190 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe91dfa-20ea-43ed-b9ae-2f363b41f123-tigera-ca-bundle\") pod \"calico-kube-controllers-fddd77667-rhh4p\" (UID: \"afe91dfa-20ea-43ed-b9ae-2f363b41f123\") " pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" Sep 13 00:55:23.879678 kubelet[2677]: I0913 00:55:23.879342 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1eff7f-6190-416d-98ad-c67415ecaa0b-config-volume\") pod \"coredns-7c65d6cfc9-bzpbb\" (UID: \"ab1eff7f-6190-416d-98ad-c67415ecaa0b\") " pod="kube-system/coredns-7c65d6cfc9-bzpbb" Sep 13 00:55:23.879678 kubelet[2677]: I0913 00:55:23.879499 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce64396-8b92-4683-bf8f-d8bcb3fc6a06-config-volume\") pod \"coredns-7c65d6cfc9-ht5gv\" (UID: \"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06\") " pod="kube-system/coredns-7c65d6cfc9-ht5gv" Sep 13 00:55:23.879678 kubelet[2677]: I0913 00:55:23.879639 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aefc875-a5a0-4dd2-a7a7-adf706fc5036-goldmane-ca-bundle\") pod \"goldmane-7988f88666-mk7xg\" (UID: \"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\") " pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:23.880281 kubelet[2677]: I0913 00:55:23.879708 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2pz\" (UniqueName: \"kubernetes.io/projected/7aefc875-a5a0-4dd2-a7a7-adf706fc5036-kube-api-access-xn2pz\") pod \"goldmane-7988f88666-mk7xg\" (UID: 
\"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\") " pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:23.880281 kubelet[2677]: I0913 00:55:23.879765 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7aefc875-a5a0-4dd2-a7a7-adf706fc5036-goldmane-key-pair\") pod \"goldmane-7988f88666-mk7xg\" (UID: \"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\") " pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:23.880281 kubelet[2677]: I0913 00:55:23.879814 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15b25ee3-c882-4d6d-87fd-8435c4ab9603-calico-apiserver-certs\") pod \"calico-apiserver-77cc844975-7r74t\" (UID: \"15b25ee3-c882-4d6d-87fd-8435c4ab9603\") " pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" Sep 13 00:55:23.880281 kubelet[2677]: I0913 00:55:23.879893 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5qb\" (UniqueName: \"kubernetes.io/projected/ab1eff7f-6190-416d-98ad-c67415ecaa0b-kube-api-access-ww5qb\") pod \"coredns-7c65d6cfc9-bzpbb\" (UID: \"ab1eff7f-6190-416d-98ad-c67415ecaa0b\") " pod="kube-system/coredns-7c65d6cfc9-bzpbb" Sep 13 00:55:23.880281 kubelet[2677]: I0913 00:55:23.879940 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9v8\" (UniqueName: \"kubernetes.io/projected/15b25ee3-c882-4d6d-87fd-8435c4ab9603-kube-api-access-7d9v8\") pod \"calico-apiserver-77cc844975-7r74t\" (UID: \"15b25ee3-c882-4d6d-87fd-8435c4ab9603\") " pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" Sep 13 00:55:23.880914 kubelet[2677]: I0913 00:55:23.879986 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-backend-key-pair\") pod \"whisker-7c8756dc7f-dsnw2\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " pod="calico-system/whisker-7c8756dc7f-dsnw2" Sep 13 00:55:23.880914 kubelet[2677]: I0913 00:55:23.880028 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aefc875-a5a0-4dd2-a7a7-adf706fc5036-config\") pod \"goldmane-7988f88666-mk7xg\" (UID: \"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\") " pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:23.880914 kubelet[2677]: I0913 00:55:23.880073 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgcq\" (UniqueName: \"kubernetes.io/projected/afe91dfa-20ea-43ed-b9ae-2f363b41f123-kube-api-access-vkgcq\") pod \"calico-kube-controllers-fddd77667-rhh4p\" (UID: \"afe91dfa-20ea-43ed-b9ae-2f363b41f123\") " pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" Sep 13 00:55:23.880914 kubelet[2677]: I0913 00:55:23.880152 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-ca-bundle\") pod \"whisker-7c8756dc7f-dsnw2\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " pod="calico-system/whisker-7c8756dc7f-dsnw2" Sep 13 00:55:23.880914 kubelet[2677]: I0913 00:55:23.880253 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbqh\" (UniqueName: \"kubernetes.io/projected/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-kube-api-access-ztbqh\") pod \"whisker-7c8756dc7f-dsnw2\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " pod="calico-system/whisker-7c8756dc7f-dsnw2" Sep 13 00:55:23.881446 kubelet[2677]: I0913 00:55:23.880404 2677 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7ff\" (UniqueName: \"kubernetes.io/projected/75fdc49e-31b1-401f-8cb1-69f2cb356414-kube-api-access-nr7ff\") pod \"calico-apiserver-77cc844975-jtt5x\" (UID: \"75fdc49e-31b1-401f-8cb1-69f2cb356414\") " pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" Sep 13 00:55:24.098228 env[1672]: time="2025-09-13T00:55:24.098127300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-7r74t,Uid:15b25ee3-c882-4d6d-87fd-8435c4ab9603,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:55:24.099083 env[1672]: time="2025-09-13T00:55:24.098407381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzpbb,Uid:ab1eff7f-6190-416d-98ad-c67415ecaa0b,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:24.101585 env[1672]: time="2025-09-13T00:55:24.101498841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ht5gv,Uid:1ce64396-8b92-4683-bf8f-d8bcb3fc6a06,Namespace:kube-system,Attempt:0,}" Sep 13 00:55:24.102094 env[1672]: time="2025-09-13T00:55:24.102028252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c8756dc7f-dsnw2,Uid:a4678133-1f4f-4e9d-a4f3-00e8af69f3bc,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:24.103885 env[1672]: time="2025-09-13T00:55:24.103825741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-jtt5x,Uid:75fdc49e-31b1-401f-8cb1-69f2cb356414,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:55:24.105897 env[1672]: time="2025-09-13T00:55:24.105790310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddd77667-rhh4p,Uid:afe91dfa-20ea-43ed-b9ae-2f363b41f123,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:24.106554 env[1672]: time="2025-09-13T00:55:24.106446888Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-mk7xg,Uid:7aefc875-a5a0-4dd2-a7a7-adf706fc5036,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:24.125143 env[1672]: time="2025-09-13T00:55:24.125036273Z" level=info msg="shim disconnected" id=dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61 Sep 13 00:55:24.125498 env[1672]: time="2025-09-13T00:55:24.125147298Z" level=warning msg="cleaning up after shim disconnected" id=dc35a771184e879f3c9a05e526a96914f668e5d88f1170e970497cf342249c61 namespace=k8s.io Sep 13 00:55:24.125498 env[1672]: time="2025-09-13T00:55:24.125178157Z" level=info msg="cleaning up dead shim" Sep 13 00:55:24.140559 env[1672]: time="2025-09-13T00:55:24.140456013Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3612 runtime=io.containerd.runc.v2\n" Sep 13 00:55:24.204414 env[1672]: time="2025-09-13T00:55:24.204338680Z" level=error msg="Failed to destroy network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.204696 env[1672]: time="2025-09-13T00:55:24.204668193Z" level=error msg="encountered an error cleaning up failed sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.204756 env[1672]: time="2025-09-13T00:55:24.204713566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ht5gv,Uid:1ce64396-8b92-4683-bf8f-d8bcb3fc6a06,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.204920 kubelet[2677]: E0913 00:55:24.204894 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.205127 kubelet[2677]: E0913 00:55:24.204943 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ht5gv" Sep 13 00:55:24.205127 kubelet[2677]: E0913 00:55:24.204958 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ht5gv" Sep 13 00:55:24.205127 kubelet[2677]: E0913 00:55:24.204986 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ht5gv_kube-system(1ce64396-8b92-4683-bf8f-d8bcb3fc6a06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-ht5gv_kube-system(1ce64396-8b92-4683-bf8f-d8bcb3fc6a06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ht5gv" podUID="1ce64396-8b92-4683-bf8f-d8bcb3fc6a06" Sep 13 00:55:24.205628 env[1672]: time="2025-09-13T00:55:24.205592901Z" level=error msg="Failed to destroy network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.205902 env[1672]: time="2025-09-13T00:55:24.205876082Z" level=error msg="encountered an error cleaning up failed sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.205952 env[1672]: time="2025-09-13T00:55:24.205917608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c8756dc7f-dsnw2,Uid:a4678133-1f4f-4e9d-a4f3-00e8af69f3bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206064 kubelet[2677]: E0913 00:55:24.206048 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206112 kubelet[2677]: E0913 00:55:24.206075 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c8756dc7f-dsnw2" Sep 13 00:55:24.206112 kubelet[2677]: E0913 00:55:24.206090 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c8756dc7f-dsnw2" Sep 13 00:55:24.206178 kubelet[2677]: E0913 00:55:24.206113 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c8756dc7f-dsnw2_calico-system(a4678133-1f4f-4e9d-a4f3-00e8af69f3bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c8756dc7f-dsnw2_calico-system(a4678133-1f4f-4e9d-a4f3-00e8af69f3bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-7c8756dc7f-dsnw2" podUID="a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" Sep 13 00:55:24.206628 env[1672]: time="2025-09-13T00:55:24.206603652Z" level=error msg="Failed to destroy network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206730 env[1672]: time="2025-09-13T00:55:24.206704646Z" level=error msg="Failed to destroy network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206807 env[1672]: time="2025-09-13T00:55:24.206791256Z" level=error msg="encountered an error cleaning up failed sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206845 env[1672]: time="2025-09-13T00:55:24.206815829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-jtt5x,Uid:75fdc49e-31b1-401f-8cb1-69f2cb356414,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206888 env[1672]: time="2025-09-13T00:55:24.206871024Z" level=error msg="encountered an error cleaning up failed sandbox 
\"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206914 env[1672]: time="2025-09-13T00:55:24.206893129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-7r74t,Uid:15b25ee3-c882-4d6d-87fd-8435c4ab9603,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206948 kubelet[2677]: E0913 00:55:24.206901 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.206948 kubelet[2677]: E0913 00:55:24.206929 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" Sep 13 00:55:24.206948 kubelet[2677]: E0913 00:55:24.206943 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" Sep 13 00:55:24.207016 kubelet[2677]: E0913 00:55:24.206961 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207016 kubelet[2677]: E0913 00:55:24.206984 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" Sep 13 00:55:24.207016 kubelet[2677]: E0913 00:55:24.206995 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" Sep 13 00:55:24.207077 kubelet[2677]: E0913 00:55:24.206964 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-77cc844975-jtt5x_calico-apiserver(75fdc49e-31b1-401f-8cb1-69f2cb356414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77cc844975-jtt5x_calico-apiserver(75fdc49e-31b1-401f-8cb1-69f2cb356414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" podUID="75fdc49e-31b1-401f-8cb1-69f2cb356414" Sep 13 00:55:24.207077 kubelet[2677]: E0913 00:55:24.207015 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77cc844975-7r74t_calico-apiserver(15b25ee3-c882-4d6d-87fd-8435c4ab9603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77cc844975-7r74t_calico-apiserver(15b25ee3-c882-4d6d-87fd-8435c4ab9603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" podUID="15b25ee3-c882-4d6d-87fd-8435c4ab9603" Sep 13 00:55:24.207206 env[1672]: time="2025-09-13T00:55:24.207191075Z" level=error msg="Failed to destroy network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207340 env[1672]: time="2025-09-13T00:55:24.207326533Z" level=error 
msg="encountered an error cleaning up failed sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207379 env[1672]: time="2025-09-13T00:55:24.207348546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzpbb,Uid:ab1eff7f-6190-416d-98ad-c67415ecaa0b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207431 kubelet[2677]: E0913 00:55:24.207419 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207455 kubelet[2677]: E0913 00:55:24.207440 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bzpbb" Sep 13 00:55:24.207479 kubelet[2677]: E0913 00:55:24.207453 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bzpbb" Sep 13 00:55:24.207479 kubelet[2677]: E0913 00:55:24.207470 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bzpbb_kube-system(ab1eff7f-6190-416d-98ad-c67415ecaa0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bzpbb_kube-system(ab1eff7f-6190-416d-98ad-c67415ecaa0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bzpbb" podUID="ab1eff7f-6190-416d-98ad-c67415ecaa0b" Sep 13 00:55:24.207688 env[1672]: time="2025-09-13T00:55:24.207659876Z" level=error msg="Failed to destroy network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207830 env[1672]: time="2025-09-13T00:55:24.207815158Z" level=error msg="encountered an error cleaning up failed sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207861 env[1672]: 
time="2025-09-13T00:55:24.207836442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddd77667-rhh4p,Uid:afe91dfa-20ea-43ed-b9ae-2f363b41f123,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207914 kubelet[2677]: E0913 00:55:24.207901 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.207944 kubelet[2677]: E0913 00:55:24.207922 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" Sep 13 00:55:24.207944 kubelet[2677]: E0913 00:55:24.207932 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" Sep 13 
00:55:24.207987 kubelet[2677]: E0913 00:55:24.207950 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fddd77667-rhh4p_calico-system(afe91dfa-20ea-43ed-b9ae-2f363b41f123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fddd77667-rhh4p_calico-system(afe91dfa-20ea-43ed-b9ae-2f363b41f123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" podUID="afe91dfa-20ea-43ed-b9ae-2f363b41f123" Sep 13 00:55:24.210002 env[1672]: time="2025-09-13T00:55:24.209955585Z" level=error msg="Failed to destroy network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.210131 env[1672]: time="2025-09-13T00:55:24.210093302Z" level=error msg="encountered an error cleaning up failed sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.210131 env[1672]: time="2025-09-13T00:55:24.210116203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mk7xg,Uid:7aefc875-a5a0-4dd2-a7a7-adf706fc5036,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.210196 kubelet[2677]: E0913 00:55:24.210179 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.210221 kubelet[2677]: E0913 00:55:24.210198 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:24.210221 kubelet[2677]: E0913 00:55:24.210209 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-mk7xg" Sep 13 00:55:24.210268 kubelet[2677]: E0913 00:55:24.210232 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-mk7xg_calico-system(7aefc875-a5a0-4dd2-a7a7-adf706fc5036)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-7988f88666-mk7xg_calico-system(7aefc875-a5a0-4dd2-a7a7-adf706fc5036)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-mk7xg" podUID="7aefc875-a5a0-4dd2-a7a7-adf706fc5036" Sep 13 00:55:24.921452 env[1672]: time="2025-09-13T00:55:24.921341812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzrs8,Uid:76f0a7cf-aca7-4535-904d-665ae5104c51,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:24.950731 env[1672]: time="2025-09-13T00:55:24.950662454Z" level=error msg="Failed to destroy network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.950935 env[1672]: time="2025-09-13T00:55:24.950887588Z" level=error msg="encountered an error cleaning up failed sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.950935 env[1672]: time="2025-09-13T00:55:24.950919530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzrs8,Uid:76f0a7cf-aca7-4535-904d-665ae5104c51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.951099 kubelet[2677]: E0913 00:55:24.951050 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:24.951099 kubelet[2677]: E0913 00:55:24.951088 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:24.951168 kubelet[2677]: E0913 00:55:24.951103 2677 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzrs8" Sep 13 00:55:24.951168 kubelet[2677]: E0913 00:55:24.951130 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rzrs8_calico-system(76f0a7cf-aca7-4535-904d-665ae5104c51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rzrs8_calico-system(76f0a7cf-aca7-4535-904d-665ae5104c51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:24.994095 kubelet[2677]: I0913 00:55:24.994082 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:24.994491 env[1672]: time="2025-09-13T00:55:24.994476161Z" level=info msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" Sep 13 00:55:24.994537 kubelet[2677]: I0913 00:55:24.994524 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:24.994766 env[1672]: time="2025-09-13T00:55:24.994753199Z" level=info msg="StopPodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" Sep 13 00:55:24.994983 kubelet[2677]: I0913 00:55:24.994974 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:24.995206 env[1672]: time="2025-09-13T00:55:24.995190328Z" level=info msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" Sep 13 00:55:24.996475 kubelet[2677]: I0913 00:55:24.996458 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:24.996661 env[1672]: time="2025-09-13T00:55:24.996644773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:55:24.996763 env[1672]: time="2025-09-13T00:55:24.996747439Z" level=info msg="StopPodSandbox for 
\"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" Sep 13 00:55:24.996942 kubelet[2677]: I0913 00:55:24.996931 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:24.997240 env[1672]: time="2025-09-13T00:55:24.997216470Z" level=info msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" Sep 13 00:55:24.997585 kubelet[2677]: I0913 00:55:24.997570 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:24.998012 env[1672]: time="2025-09-13T00:55:24.997979773Z" level=info msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" Sep 13 00:55:24.998223 kubelet[2677]: I0913 00:55:24.998205 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:24.998722 env[1672]: time="2025-09-13T00:55:24.998693917Z" level=info msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" Sep 13 00:55:24.998816 kubelet[2677]: I0913 00:55:24.998776 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:24.999190 env[1672]: time="2025-09-13T00:55:24.999170829Z" level=info msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" Sep 13 00:55:25.013242 env[1672]: time="2025-09-13T00:55:25.013195074Z" level=error msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" failed" error="failed to destroy network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.013377 env[1672]: time="2025-09-13T00:55:25.013265651Z" level=error msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" failed" error="failed to destroy network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.013422 kubelet[2677]: E0913 00:55:25.013372 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:25.013464 kubelet[2677]: E0913 00:55:25.013411 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57"} Sep 13 00:55:25.013464 kubelet[2677]: E0913 00:55:25.013447 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afe91dfa-20ea-43ed-b9ae-2f363b41f123\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.013464 kubelet[2677]: E0913 00:55:25.013372 2677 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:25.013575 kubelet[2677]: E0913 00:55:25.013463 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"afe91dfa-20ea-43ed-b9ae-2f363b41f123\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" podUID="afe91dfa-20ea-43ed-b9ae-2f363b41f123" Sep 13 00:55:25.013575 kubelet[2677]: E0913 00:55:25.013465 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f"} Sep 13 00:55:25.013575 kubelet[2677]: E0913 00:55:25.013489 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.013575 kubelet[2677]: E0913 00:55:25.013499 2677 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ht5gv" podUID="1ce64396-8b92-4683-bf8f-d8bcb3fc6a06" Sep 13 00:55:25.013737 env[1672]: time="2025-09-13T00:55:25.013509271Z" level=error msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" failed" error="failed to destroy network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.013737 env[1672]: time="2025-09-13T00:55:25.013520362Z" level=error msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" failed" error="failed to destroy network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.013737 env[1672]: time="2025-09-13T00:55:25.013633458Z" level=error msg="StopPodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" failed" error="failed to destroy network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 13 00:55:25.013737 env[1672]: time="2025-09-13T00:55:25.013682415Z" level=error msg="StopPodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" failed" error="failed to destroy network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.013821 kubelet[2677]: E0913 00:55:25.013583 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:25.013821 kubelet[2677]: E0913 00:55:25.013595 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:25.013821 kubelet[2677]: E0913 00:55:25.013613 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89"} Sep 13 00:55:25.013821 kubelet[2677]: E0913 00:55:25.013630 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.013918 kubelet[2677]: E0913 00:55:25.013641 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c8756dc7f-dsnw2" podUID="a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" Sep 13 00:55:25.013918 kubelet[2677]: E0913 00:55:25.013600 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825"} Sep 13 00:55:25.013918 kubelet[2677]: E0913 00:55:25.013660 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.013918 kubelet[2677]: E0913 00:55:25.013668 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7aefc875-a5a0-4dd2-a7a7-adf706fc5036\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-mk7xg" podUID="7aefc875-a5a0-4dd2-a7a7-adf706fc5036" Sep 13 00:55:25.014031 kubelet[2677]: E0913 00:55:25.013706 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:25.014031 kubelet[2677]: E0913 00:55:25.013718 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d"} Sep 13 00:55:25.014031 kubelet[2677]: E0913 00:55:25.013736 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15b25ee3-c882-4d6d-87fd-8435c4ab9603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.014031 kubelet[2677]: E0913 00:55:25.013750 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:25.014031 kubelet[2677]: E0913 00:55:25.013765 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91"} Sep 13 00:55:25.014137 env[1672]: time="2025-09-13T00:55:25.013959378Z" level=error msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" failed" error="failed to destroy network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.014161 kubelet[2677]: E0913 00:55:25.013779 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75fdc49e-31b1-401f-8cb1-69f2cb356414\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.014161 kubelet[2677]: E0913 00:55:25.013795 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75fdc49e-31b1-401f-8cb1-69f2cb356414\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" podUID="75fdc49e-31b1-401f-8cb1-69f2cb356414" Sep 13 00:55:25.014161 kubelet[2677]: E0913 00:55:25.013752 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15b25ee3-c882-4d6d-87fd-8435c4ab9603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" podUID="15b25ee3-c882-4d6d-87fd-8435c4ab9603" Sep 13 00:55:25.014260 kubelet[2677]: E0913 00:55:25.014023 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:25.014260 kubelet[2677]: E0913 00:55:25.014038 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298"} Sep 13 00:55:25.014260 kubelet[2677]: E0913 00:55:25.014050 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76f0a7cf-aca7-4535-904d-665ae5104c51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.014260 kubelet[2677]: E0913 00:55:25.014060 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76f0a7cf-aca7-4535-904d-665ae5104c51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzrs8" podUID="76f0a7cf-aca7-4535-904d-665ae5104c51" Sep 13 00:55:25.016338 env[1672]: time="2025-09-13T00:55:25.016316579Z" level=error msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" failed" error="failed to destroy network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:55:25.016460 kubelet[2677]: E0913 00:55:25.016401 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:25.016460 kubelet[2677]: E0913 00:55:25.016431 2677 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04"} Sep 13 00:55:25.016460 kubelet[2677]: E0913 00:55:25.016447 2677 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab1eff7f-6190-416d-98ad-c67415ecaa0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:55:25.016564 kubelet[2677]: E0913 00:55:25.016460 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab1eff7f-6190-416d-98ad-c67415ecaa0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bzpbb" podUID="ab1eff7f-6190-416d-98ad-c67415ecaa0b" Sep 13 00:55:31.801027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896893883.mount: Deactivated successfully. 
Sep 13 00:55:31.817972 env[1672]: time="2025-09-13T00:55:31.817894063Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:31.819186 env[1672]: time="2025-09-13T00:55:31.819157453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:31.821046 env[1672]: time="2025-09-13T00:55:31.820978495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:31.822596 env[1672]: time="2025-09-13T00:55:31.822518039Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:31.823482 env[1672]: time="2025-09-13T00:55:31.823410309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:55:31.832824 env[1672]: time="2025-09-13T00:55:31.832776671Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:55:31.838400 env[1672]: time="2025-09-13T00:55:31.838337540Z" level=info msg="CreateContainer within sandbox \"97775063a99d0a48f5cc91f2331b25e6d7c9a952da7b4e6e8161d29b5c655883\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8aea048abb1bc39a2b880f2c22cdd9780eb8a3ea1403ecb497bd49e987560da3\"" Sep 13 00:55:31.838756 env[1672]: time="2025-09-13T00:55:31.838741835Z" level=info msg="StartContainer for 
\"8aea048abb1bc39a2b880f2c22cdd9780eb8a3ea1403ecb497bd49e987560da3\"" Sep 13 00:55:31.861969 env[1672]: time="2025-09-13T00:55:31.861945276Z" level=info msg="StartContainer for \"8aea048abb1bc39a2b880f2c22cdd9780eb8a3ea1403ecb497bd49e987560da3\" returns successfully" Sep 13 00:55:32.021139 kubelet[2677]: I0913 00:55:32.021105 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5fz8w" podStartSLOduration=0.78316261 podStartE2EDuration="19.021091986s" podCreationTimestamp="2025-09-13 00:55:13 +0000 UTC" firstStartedPulling="2025-09-13 00:55:13.586538837 +0000 UTC m=+16.747656981" lastFinishedPulling="2025-09-13 00:55:31.824468203 +0000 UTC m=+34.985586357" observedRunningTime="2025-09-13 00:55:32.019523777 +0000 UTC m=+35.180641916" watchObservedRunningTime="2025-09-13 00:55:32.021091986 +0000 UTC m=+35.182210120" Sep 13 00:55:32.022761 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:55:32.022803 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:55:32.079120 env[1672]: time="2025-09-13T00:55:32.079037532Z" level=info msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.102 [INFO][4234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.102 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" iface="eth0" netns="/var/run/netns/cni-48414cb3-f5da-60ba-527a-c22c188f5908" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.103 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" iface="eth0" netns="/var/run/netns/cni-48414cb3-f5da-60ba-527a-c22c188f5908" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.103 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" iface="eth0" netns="/var/run/netns/cni-48414cb3-f5da-60ba-527a-c22c188f5908" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.103 [INFO][4234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.103 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.112 [INFO][4252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.112 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.112 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.116 [WARNING][4252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.116 [INFO][4252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.117 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:32.119660 env[1672]: 2025-09-13 00:55:32.118 [INFO][4234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:32.119975 env[1672]: time="2025-09-13T00:55:32.119719958Z" level=info msg="TearDown network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" successfully" Sep 13 00:55:32.119975 env[1672]: time="2025-09-13T00:55:32.119739610Z" level=info msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" returns successfully" Sep 13 00:55:32.235815 kubelet[2677]: I0913 00:55:32.235747 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-ca-bundle\") pod \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " Sep 13 00:55:32.236046 kubelet[2677]: I0913 00:55:32.235832 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-backend-key-pair\") pod 
\"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " Sep 13 00:55:32.236046 kubelet[2677]: I0913 00:55:32.235915 2677 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbqh\" (UniqueName: \"kubernetes.io/projected/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-kube-api-access-ztbqh\") pod \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\" (UID: \"a4678133-1f4f-4e9d-a4f3-00e8af69f3bc\") " Sep 13 00:55:32.236637 kubelet[2677]: I0913 00:55:32.236514 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" (UID: "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:55:32.242125 kubelet[2677]: I0913 00:55:32.242054 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-kube-api-access-ztbqh" (OuterVolumeSpecName: "kube-api-access-ztbqh") pod "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" (UID: "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc"). InnerVolumeSpecName "kube-api-access-ztbqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:55:32.242328 kubelet[2677]: I0913 00:55:32.242147 2677 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" (UID: "a4678133-1f4f-4e9d-a4f3-00e8af69f3bc"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:55:32.337382 kubelet[2677]: I0913 00:55:32.337198 2677 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-d04f0c45dd\" DevicePath \"\"" Sep 13 00:55:32.337382 kubelet[2677]: I0913 00:55:32.337258 2677 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbqh\" (UniqueName: \"kubernetes.io/projected/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-kube-api-access-ztbqh\") on node \"ci-3510.3.8-n-d04f0c45dd\" DevicePath \"\"" Sep 13 00:55:32.337382 kubelet[2677]: I0913 00:55:32.337288 2677 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc-whisker-ca-bundle\") on node \"ci-3510.3.8-n-d04f0c45dd\" DevicePath \"\"" Sep 13 00:55:32.804442 systemd[1]: run-netns-cni\x2d48414cb3\x2df5da\x2d60ba\x2d527a\x2dc22c188f5908.mount: Deactivated successfully. Sep 13 00:55:32.804513 systemd[1]: var-lib-kubelet-pods-a4678133\x2d1f4f\x2d4e9d\x2da4f3\x2d00e8af69f3bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztbqh.mount: Deactivated successfully. Sep 13 00:55:32.804571 systemd[1]: var-lib-kubelet-pods-a4678133\x2d1f4f\x2d4e9d\x2da4f3\x2d00e8af69f3bc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:55:33.144169 kubelet[2677]: I0913 00:55:33.143951 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsj5\" (UniqueName: \"kubernetes.io/projected/57717e74-1cc1-4207-9a9f-7cd1f2784e12-kube-api-access-tlsj5\") pod \"whisker-68fb5d8b94-bxzkg\" (UID: \"57717e74-1cc1-4207-9a9f-7cd1f2784e12\") " pod="calico-system/whisker-68fb5d8b94-bxzkg" Sep 13 00:55:33.144169 kubelet[2677]: I0913 00:55:33.144064 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57717e74-1cc1-4207-9a9f-7cd1f2784e12-whisker-backend-key-pair\") pod \"whisker-68fb5d8b94-bxzkg\" (UID: \"57717e74-1cc1-4207-9a9f-7cd1f2784e12\") " pod="calico-system/whisker-68fb5d8b94-bxzkg" Sep 13 00:55:33.144169 kubelet[2677]: I0913 00:55:33.144117 2677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57717e74-1cc1-4207-9a9f-7cd1f2784e12-whisker-ca-bundle\") pod \"whisker-68fb5d8b94-bxzkg\" (UID: \"57717e74-1cc1-4207-9a9f-7cd1f2784e12\") " pod="calico-system/whisker-68fb5d8b94-bxzkg" Sep 13 00:55:33.379000 audit[4324]: AVC avc: denied { write } for pid=4324 comm="tee" name="fd" dev="proc" ino=28622 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.385853 env[1672]: time="2025-09-13T00:55:33.385825842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fb5d8b94-bxzkg,Uid:57717e74-1cc1-4207-9a9f-7cd1f2784e12,Namespace:calico-system,Attempt:0,}" Sep 13 00:55:33.407287 kernel: kauditd_printk_skb: 19 callbacks suppressed Sep 13 00:55:33.407390 kernel: audit: type=1400 audit(1757724933.379:274): avc: denied { write } for pid=4324 comm="tee" name="fd" dev="proc" ino=28622 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4324]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2d6d87c0 a2=241 a3=1b6 items=1 ppid=4292 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.567448 kernel: audit: type=1300 audit(1757724933.379:274): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2d6d87c0 a2=241 a3=1b6 items=1 ppid=4292 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.567501 kernel: audit: type=1307 audit(1757724933.379:274): cwd="/etc/service/enabled/bird/log" Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/bird/log" Sep 13 00:55:33.597152 kernel: audit: type=1302 audit(1757724933.379:274): item=0 name="/dev/fd/63" inode=39033 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=39033 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.721778 kernel: audit: type=1327 audit(1757724933.379:274): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.721817 kernel: audit: type=1400 audit(1757724933.379:275): avc: denied { write } for pid=4332 comm="tee" name="fd" dev="proc" ino=20415 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4332]: AVC avc: denied { write } for pid=4332 comm="tee" name="fd" dev="proc" ino=20415 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.785200 kernel: audit: type=1300 audit(1757724933.379:275): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd813617af a2=241 a3=1b6 items=1 ppid=4296 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit[4332]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd813617af a2=241 a3=1b6 items=1 ppid=4296 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.879513 kernel: audit: type=1307 audit(1757724933.379:275): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Sep 13 00:55:33.910418 kernel: audit: type=1302 audit(1757724933.379:275): item=0 name="/dev/fd/63" inode=20412 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=20412 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.974259 kernel: audit: type=1327 audit(1757724933.379:275): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit[4337]: AVC avc: denied { write } for pid=4337 comm="tee" name="fd" dev="proc" ino=30327 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4337]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef83ed7c1 a2=241 a3=1b6 items=1 ppid=4299 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/cni/log" Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=39034 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit[4336]: AVC avc: denied { write } for pid=4336 comm="tee" name="fd" dev="proc" ino=18371 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4336]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3cb0e7b0 a2=241 a3=1b6 items=1 ppid=4295 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Sep 13 00:55:33.379000 audit[4338]: AVC avc: denied { write } for pid=4338 comm="tee" name="fd" dev="proc" ino=27594 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=18368 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit[4338]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf03ad7bf a2=241 a3=1b6 items=1 ppid=4298 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/felix/log" Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=27591 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit[4335]: AVC avc: denied { write } for pid=4335 comm="tee" name="fd" dev="proc" ino=34295 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4335]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd832a97bf a2=241 a3=1b6 items=1 ppid=4294 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/bird6/log" Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=34292 
dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:33.379000 audit[4333]: AVC avc: denied { write } for pid=4333 comm="tee" name="fd" dev="proc" ino=35302 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Sep 13 00:55:33.379000 audit[4333]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe07527bf a2=241 a3=1b6 items=1 ppid=4293 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:33.379000 audit: CWD cwd="/etc/service/enabled/confd/log" Sep 13 00:55:33.379000 audit: PATH item=0 name="/dev/fd/63" inode=30324 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:55:33.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Sep 13 00:55:34.036142 systemd-networkd[1410]: cali682e8de6bca: Link UP Sep 13 00:55:34.091275 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:55:34.091312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali682e8de6bca: link becomes ready Sep 13 00:55:34.091371 systemd-networkd[1410]: cali682e8de6bca: Gained carrier Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.404 [INFO][4391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.414 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0 whisker-68fb5d8b94- calico-system 57717e74-1cc1-4207-9a9f-7cd1f2784e12 896 0 2025-09-13 00:55:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68fb5d8b94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd whisker-68fb5d8b94-bxzkg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali682e8de6bca [] [] }} ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.414 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.427 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" HandleID="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.427 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" HandleID="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139750), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"whisker-68fb5d8b94-bxzkg", "timestamp":"2025-09-13 00:55:33.427638157 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.427 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.427 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.427 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.567 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.571 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.573 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.575 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.576 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.576 [INFO][4424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 
handle="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.577 [INFO][4424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242 Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.579 [INFO][4424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.583 [INFO][4424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.1/26] block=192.168.13.0/26 handle="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.583 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.1/26] handle="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.583 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:55:34.097999 env[1672]: 2025-09-13 00:55:33.583 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.1/26] IPv6=[] ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" HandleID="k8s-pod-network.5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:33.584 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0", GenerateName:"whisker-68fb5d8b94-", Namespace:"calico-system", SelfLink:"", UID:"57717e74-1cc1-4207-9a9f-7cd1f2784e12", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68fb5d8b94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"whisker-68fb5d8b94-bxzkg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali682e8de6bca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:33.584 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.1/32] ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:33.584 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali682e8de6bca ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:34.091 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:34.091 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0", GenerateName:"whisker-68fb5d8b94-", Namespace:"calico-system", SelfLink:"", UID:"57717e74-1cc1-4207-9a9f-7cd1f2784e12", ResourceVersion:"896", 
Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68fb5d8b94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242", Pod:"whisker-68fb5d8b94-bxzkg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali682e8de6bca", MAC:"2a:7f:3e:42:7e:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:34.098441 env[1672]: 2025-09-13 00:55:34.097 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242" Namespace="calico-system" Pod="whisker-68fb5d8b94-bxzkg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--68fb5d8b94--bxzkg-eth0" Sep 13 00:55:34.117601 env[1672]: time="2025-09-13T00:55:34.117411170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:34.117601 env[1672]: time="2025-09-13T00:55:34.117504543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:34.117601 env[1672]: time="2025-09-13T00:55:34.117536171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:34.118023 env[1672]: time="2025-09-13T00:55:34.117907237Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242 pid=4462 runtime=io.containerd.runc.v2 Sep 13 00:55:34.150171 env[1672]: time="2025-09-13T00:55:34.150144377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fb5d8b94-bxzkg,Uid:57717e74-1cc1-4207-9a9f-7cd1f2784e12,Namespace:calico-system,Attempt:0,} returns sandbox id \"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242\"" Sep 13 00:55:34.150851 env[1672]: time="2025-09-13T00:55:34.150838598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:55:34.920705 kubelet[2677]: I0913 00:55:34.920641 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4678133-1f4f-4e9d-a4f3-00e8af69f3bc" path="/var/lib/kubelet/pods/a4678133-1f4f-4e9d-a4f3-00e8af69f3bc/volumes" Sep 13 00:55:35.678526 systemd-networkd[1410]: cali682e8de6bca: Gained IPv6LL Sep 13 00:55:35.762662 env[1672]: time="2025-09-13T00:55:35.762609224Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:35.763199 env[1672]: time="2025-09-13T00:55:35.763183682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:35.763852 env[1672]: time="2025-09-13T00:55:35.763813818Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:35.764793 env[1672]: time="2025-09-13T00:55:35.764781485Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:35.764994 env[1672]: time="2025-09-13T00:55:35.764979833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:55:35.766193 env[1672]: time="2025-09-13T00:55:35.766169307Z" level=info msg="CreateContainer within sandbox \"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:55:35.769958 env[1672]: time="2025-09-13T00:55:35.769940354Z" level=info msg="CreateContainer within sandbox \"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b7fac9ebaa859de559ae4ca3aef5864ce8bc25a8a4cb91c94edfd50441a81fb1\"" Sep 13 00:55:35.770253 env[1672]: time="2025-09-13T00:55:35.770240538Z" level=info msg="StartContainer for \"b7fac9ebaa859de559ae4ca3aef5864ce8bc25a8a4cb91c94edfd50441a81fb1\"" Sep 13 00:55:35.803540 env[1672]: time="2025-09-13T00:55:35.803516198Z" level=info msg="StartContainer for \"b7fac9ebaa859de559ae4ca3aef5864ce8bc25a8a4cb91c94edfd50441a81fb1\" returns successfully" Sep 13 00:55:35.804091 env[1672]: time="2025-09-13T00:55:35.804078663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:55:37.916481 env[1672]: time="2025-09-13T00:55:37.916437892Z" level=info msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" Sep 13 00:55:37.916481 
env[1672]: time="2025-09-13T00:55:37.916437798Z" level=info msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" Sep 13 00:55:37.916933 env[1672]: time="2025-09-13T00:55:37.916534917Z" level=info msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" iface="eth0" netns="/var/run/netns/cni-4f99b88a-ae7b-2b72-863a-cc742e449b33" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" iface="eth0" netns="/var/run/netns/cni-4f99b88a-ae7b-2b72-863a-cc742e449b33" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" iface="eth0" netns="/var/run/netns/cni-4f99b88a-ae7b-2b72-863a-cc742e449b33" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.944 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.953 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.953 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.953 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.957 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.957 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.958 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:37.960117 env[1672]: 2025-09-13 00:55:37.959 [INFO][4793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:37.960579 env[1672]: time="2025-09-13T00:55:37.960204983Z" level=info msg="TearDown network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" successfully" Sep 13 00:55:37.960579 env[1672]: time="2025-09-13T00:55:37.960233155Z" level=info msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" returns successfully" Sep 13 00:55:37.960648 env[1672]: time="2025-09-13T00:55:37.960603103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzpbb,Uid:ab1eff7f-6190-416d-98ad-c67415ecaa0b,Namespace:kube-system,Attempt:1,}" Sep 13 00:55:37.962065 systemd[1]: run-netns-cni\x2d4f99b88a\x2dae7b\x2d2b72\x2d863a\x2dcc742e449b33.mount: Deactivated successfully. 
Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.943 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.944 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" iface="eth0" netns="/var/run/netns/cni-d5db3708-8c5c-4b65-cd0e-35183cd9f957" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.944 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" iface="eth0" netns="/var/run/netns/cni-d5db3708-8c5c-4b65-cd0e-35183cd9f957" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.944 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" iface="eth0" netns="/var/run/netns/cni-d5db3708-8c5c-4b65-cd0e-35183cd9f957" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.944 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.944 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.953 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.953 [INFO][4842] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.958 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.962 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.962 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.963 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:37.964652 env[1672]: 2025-09-13 00:55:37.963 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:37.965050 env[1672]: time="2025-09-13T00:55:37.964684302Z" level=info msg="TearDown network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" successfully" Sep 13 00:55:37.965050 env[1672]: time="2025-09-13T00:55:37.964700635Z" level=info msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" returns successfully" Sep 13 00:55:37.965050 env[1672]: time="2025-09-13T00:55:37.965003976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddd77667-rhh4p,Uid:afe91dfa-20ea-43ed-b9ae-2f363b41f123,Namespace:calico-system,Attempt:1,}" Sep 13 00:55:37.966346 systemd[1]: run-netns-cni\x2dd5db3708\x2d8c5c\x2d4b65\x2dcd0e\x2d35183cd9f957.mount: Deactivated successfully. Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" iface="eth0" netns="/var/run/netns/cni-414e8b42-88df-1bdc-ffba-fba441b9a1ac" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" iface="eth0" netns="/var/run/netns/cni-414e8b42-88df-1bdc-ffba-fba441b9a1ac" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" iface="eth0" netns="/var/run/netns/cni-414e8b42-88df-1bdc-ffba-fba441b9a1ac" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.943 [INFO][4795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.953 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.953 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.963 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.966 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.966 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.967 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:37.968915 env[1672]: 2025-09-13 00:55:37.968 [INFO][4795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:37.969311 env[1672]: time="2025-09-13T00:55:37.968941573Z" level=info msg="TearDown network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" successfully" Sep 13 00:55:37.969311 env[1672]: time="2025-09-13T00:55:37.968955005Z" level=info msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" returns successfully" Sep 13 00:55:37.969311 env[1672]: time="2025-09-13T00:55:37.969222841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mk7xg,Uid:7aefc875-a5a0-4dd2-a7a7-adf706fc5036,Namespace:calico-system,Attempt:1,}" Sep 13 00:55:37.970567 systemd[1]: run-netns-cni\x2d414e8b42\x2d88df\x2d1bdc\x2dffba\x2dfba441b9a1ac.mount: Deactivated successfully. 
Sep 13 00:55:38.054772 systemd-networkd[1410]: calid74fc2f57dd: Link UP Sep 13 00:55:38.108747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:55:38.108809 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid74fc2f57dd: link becomes ready Sep 13 00:55:38.108798 systemd-networkd[1410]: calid74fc2f57dd: Gained carrier Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.012 [INFO][4895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.019 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0 goldmane-7988f88666- calico-system 7aefc875-a5a0-4dd2-a7a7-adf706fc5036 921 0 2025-09-13 00:55:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd goldmane-7988f88666-mk7xg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid74fc2f57dd [] [] }} ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.019 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.031 [INFO][4953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" 
HandleID="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.031 [INFO][4953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" HandleID="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"goldmane-7988f88666-mk7xg", "timestamp":"2025-09-13 00:55:38.031816974 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.031 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.031 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.032 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.036 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.040 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.043 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.044 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.046 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.046 [INFO][4953] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.047 [INFO][4953] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895 Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.050 [INFO][4953] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.052 [INFO][4953] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.2/26] block=192.168.13.0/26 
handle="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.052 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.2/26] handle="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.052 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:38.115587 env[1672]: 2025-09-13 00:55:38.052 [INFO][4953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.2/26] IPv6=[] ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" HandleID="k8s-pod-network.6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.053 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7aefc875-a5a0-4dd2-a7a7-adf706fc5036", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"goldmane-7988f88666-mk7xg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid74fc2f57dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.053 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.2/32] ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.053 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid74fc2f57dd ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.108 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.109 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7aefc875-a5a0-4dd2-a7a7-adf706fc5036", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895", Pod:"goldmane-7988f88666-mk7xg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid74fc2f57dd", MAC:"0a:15:d4:d4:d2:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.116020 env[1672]: 2025-09-13 00:55:38.114 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895" Namespace="calico-system" Pod="goldmane-7988f88666-mk7xg" 
WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:38.120646 env[1672]: time="2025-09-13T00:55:38.120585133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:38.120646 env[1672]: time="2025-09-13T00:55:38.120606909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:38.120646 env[1672]: time="2025-09-13T00:55:38.120613696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:38.120741 env[1672]: time="2025-09-13T00:55:38.120674749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895 pid=5005 runtime=io.containerd.runc.v2 Sep 13 00:55:38.148348 env[1672]: time="2025-09-13T00:55:38.148312408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-mk7xg,Uid:7aefc875-a5a0-4dd2-a7a7-adf706fc5036,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895\"" Sep 13 00:55:38.157384 systemd-networkd[1410]: calied561b82b78: Link UP Sep 13 00:55:38.184368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calied561b82b78: link becomes ready Sep 13 00:55:38.184606 systemd-networkd[1410]: calied561b82b78: Gained carrier Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.012 [INFO][4905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.019 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0 calico-kube-controllers-fddd77667- calico-system 
afe91dfa-20ea-43ed-b9ae-2f363b41f123 922 0 2025-09-13 00:55:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fddd77667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd calico-kube-controllers-fddd77667-rhh4p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied561b82b78 [] [] }} ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.019 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.032 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" HandleID="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.032 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" HandleID="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000345c40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"calico-kube-controllers-fddd77667-rhh4p", "timestamp":"2025-09-13 00:55:38.032168546 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.032 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.052 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.052 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.137 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.140 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.143 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.144 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.146 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.146 [INFO][4955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 
handle="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.148 [INFO][4955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.150 [INFO][4955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.153 [INFO][4955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.3/26] block=192.168.13.0/26 handle="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.153 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.3/26] handle="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.153 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:55:38.191066 env[1672]: 2025-09-13 00:55:38.153 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.3/26] IPv6=[] ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" HandleID="k8s-pod-network.2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.156 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0", GenerateName:"calico-kube-controllers-fddd77667-", Namespace:"calico-system", SelfLink:"", UID:"afe91dfa-20ea-43ed-b9ae-2f363b41f123", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddd77667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"calico-kube-controllers-fddd77667-rhh4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied561b82b78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.156 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.3/32] ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.156 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied561b82b78 ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.184 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.184 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0", GenerateName:"calico-kube-controllers-fddd77667-", Namespace:"calico-system", SelfLink:"", UID:"afe91dfa-20ea-43ed-b9ae-2f363b41f123", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddd77667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc", Pod:"calico-kube-controllers-fddd77667-rhh4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied561b82b78", MAC:"76:ee:10:19:bb:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.191514 env[1672]: 2025-09-13 00:55:38.190 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc" Namespace="calico-system" Pod="calico-kube-controllers-fddd77667-rhh4p" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:38.195412 env[1672]: 
time="2025-09-13T00:55:38.195370087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:38.195412 env[1672]: time="2025-09-13T00:55:38.195403924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:38.195412 env[1672]: time="2025-09-13T00:55:38.195411049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:38.195579 env[1672]: time="2025-09-13T00:55:38.195526743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc pid=5056 runtime=io.containerd.runc.v2 Sep 13 00:55:38.222459 env[1672]: time="2025-09-13T00:55:38.222430556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fddd77667-rhh4p,Uid:afe91dfa-20ea-43ed-b9ae-2f363b41f123,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc\"" Sep 13 00:55:38.255690 systemd-networkd[1410]: calibbec6312a20: Link UP Sep 13 00:55:38.282369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibbec6312a20: link becomes ready Sep 13 00:55:38.282414 systemd-networkd[1410]: calibbec6312a20: Gained carrier Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.011 [INFO][4888] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.018 [INFO][4888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0 coredns-7c65d6cfc9- kube-system ab1eff7f-6190-416d-98ad-c67415ecaa0b 920 0 2025-09-13 00:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd coredns-7c65d6cfc9-bzpbb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibbec6312a20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.018 [INFO][4888] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.033 [INFO][4951] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" HandleID="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.033 [INFO][4951] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" HandleID="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e9940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"coredns-7c65d6cfc9-bzpbb", "timestamp":"2025-09-13 00:55:38.033814906 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.033 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.153 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.154 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.237 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.241 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.244 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.245 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.247 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.247 [INFO][4951] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.248 [INFO][4951] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116 Sep 13 00:55:38.304935 env[1672]: 2025-09-13 
00:55:38.250 [INFO][4951] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.253 [INFO][4951] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.4/26] block=192.168.13.0/26 handle="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.253 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.4/26] handle="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.253 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:38.304935 env[1672]: 2025-09-13 00:55:38.253 [INFO][4951] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.4/26] IPv6=[] ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" HandleID="k8s-pod-network.8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.254 [INFO][4888] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ab1eff7f-6190-416d-98ad-c67415ecaa0b", ResourceVersion:"920", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"coredns-7c65d6cfc9-bzpbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbec6312a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.254 [INFO][4888] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.4/32] ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.254 [INFO][4888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbec6312a20 
ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.282 [INFO][4888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.282 [INFO][4888] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ab1eff7f-6190-416d-98ad-c67415ecaa0b", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116", 
Pod:"coredns-7c65d6cfc9-bzpbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbec6312a20", MAC:"72:f9:7e:3f:9a:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:38.305376 env[1672]: 2025-09-13 00:55:38.303 [INFO][4888] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bzpbb" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:38.309561 env[1672]: time="2025-09-13T00:55:38.309528236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:38.309561 env[1672]: time="2025-09-13T00:55:38.309549058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:38.309561 env[1672]: time="2025-09-13T00:55:38.309555885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:38.309682 env[1672]: time="2025-09-13T00:55:38.309622107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116 pid=5100 runtime=io.containerd.runc.v2 Sep 13 00:55:38.336385 env[1672]: time="2025-09-13T00:55:38.336358237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bzpbb,Uid:ab1eff7f-6190-416d-98ad-c67415ecaa0b,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116\"" Sep 13 00:55:38.337489 env[1672]: time="2025-09-13T00:55:38.337471486Z" level=info msg="CreateContainer within sandbox \"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:55:38.341486 env[1672]: time="2025-09-13T00:55:38.341437725Z" level=info msg="CreateContainer within sandbox \"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ee353701061f5829b72e4b9472463e29a3ead3fe16a6b9f57774a40b9374f6c\"" Sep 13 00:55:38.341615 env[1672]: time="2025-09-13T00:55:38.341602396Z" level=info msg="StartContainer for \"7ee353701061f5829b72e4b9472463e29a3ead3fe16a6b9f57774a40b9374f6c\"" Sep 13 00:55:38.361495 env[1672]: time="2025-09-13T00:55:38.361469991Z" level=info msg="StartContainer for \"7ee353701061f5829b72e4b9472463e29a3ead3fe16a6b9f57774a40b9374f6c\" returns successfully" Sep 13 00:55:38.495302 env[1672]: time="2025-09-13T00:55:38.495220377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:38.495814 env[1672]: time="2025-09-13T00:55:38.495802326Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:38.496613 env[1672]: time="2025-09-13T00:55:38.496571055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:38.497299 env[1672]: time="2025-09-13T00:55:38.497259153Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:38.497909 env[1672]: time="2025-09-13T00:55:38.497872898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:55:38.498417 env[1672]: time="2025-09-13T00:55:38.498401107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:55:38.498918 env[1672]: time="2025-09-13T00:55:38.498875257Z" level=info msg="CreateContainer within sandbox \"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:55:38.502473 env[1672]: time="2025-09-13T00:55:38.502425161Z" level=info msg="CreateContainer within sandbox \"5be1e465481032c2272ec0eaebe96e08257358030f8b03f387dc9c38d84e2242\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"82514766d5c9034f4fa84810a31304a54c450fd2246d694aba890f54b2bea61b\"" Sep 13 00:55:38.502644 env[1672]: time="2025-09-13T00:55:38.502629257Z" level=info msg="StartContainer for \"82514766d5c9034f4fa84810a31304a54c450fd2246d694aba890f54b2bea61b\"" Sep 13 00:55:38.543432 env[1672]: time="2025-09-13T00:55:38.543404331Z" level=info 
msg="StartContainer for \"82514766d5c9034f4fa84810a31304a54c450fd2246d694aba890f54b2bea61b\" returns successfully" Sep 13 00:55:38.916027 env[1672]: time="2025-09-13T00:55:38.915897901Z" level=info msg="StopPodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" Sep 13 00:55:38.975019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438072326.mount: Deactivated successfully. Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.983 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.983 [INFO][5295] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" iface="eth0" netns="/var/run/netns/cni-6b2bf889-1a90-7a55-9d3a-a7e8a320eb92" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.983 [INFO][5295] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" iface="eth0" netns="/var/run/netns/cni-6b2bf889-1a90-7a55-9d3a-a7e8a320eb92" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.983 [INFO][5295] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" iface="eth0" netns="/var/run/netns/cni-6b2bf889-1a90-7a55-9d3a-a7e8a320eb92" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.983 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:38.984 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.008 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.008 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.008 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.016 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.016 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.018 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:39.021939 env[1672]: 2025-09-13 00:55:39.020 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:39.022998 env[1672]: time="2025-09-13T00:55:39.022051318Z" level=info msg="TearDown network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" successfully" Sep 13 00:55:39.022998 env[1672]: time="2025-09-13T00:55:39.022089076Z" level=info msg="StopPodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" returns successfully" Sep 13 00:55:39.022998 env[1672]: time="2025-09-13T00:55:39.022875374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-jtt5x,Uid:75fdc49e-31b1-401f-8cb1-69f2cb356414,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:55:39.026528 systemd[1]: run-netns-cni\x2d6b2bf889\x2d1a90\x2d7a55\x2d9d3a\x2da7e8a320eb92.mount: Deactivated successfully. 
Sep 13 00:55:39.047452 kubelet[2677]: I0913 00:55:39.047379 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68fb5d8b94-bxzkg" podStartSLOduration=1.69979369 podStartE2EDuration="6.047344623s" podCreationTimestamp="2025-09-13 00:55:33 +0000 UTC" firstStartedPulling="2025-09-13 00:55:34.150700196 +0000 UTC m=+37.311818330" lastFinishedPulling="2025-09-13 00:55:38.498251125 +0000 UTC m=+41.659369263" observedRunningTime="2025-09-13 00:55:39.047335542 +0000 UTC m=+42.208453698" watchObservedRunningTime="2025-09-13 00:55:39.047344623 +0000 UTC m=+42.208462778" Sep 13 00:55:39.054000 audit[5352]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=5352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.077645 kubelet[2677]: I0913 00:55:39.077592 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bzpbb" podStartSLOduration=36.077571222 podStartE2EDuration="36.077571222s" podCreationTimestamp="2025-09-13 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:39.076834029 +0000 UTC m=+42.237952194" watchObservedRunningTime="2025-09-13 00:55:39.077571222 +0000 UTC m=+42.238689370" Sep 13 00:55:39.080477 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 13 00:55:39.080571 kernel: audit: type=1325 audit(1757724939.054:281): table=filter:97 family=2 entries=21 op=nft_register_rule pid=5352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.054000 audit[5352]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde98f11a0 a2=0 a3=7ffde98f118c items=0 ppid=2858 pid=5352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:55:39.135427 kernel: audit: type=1300 audit(1757724939.054:281): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde98f11a0 a2=0 a3=7ffde98f118c items=0 ppid=2858 pid=5352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:39.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.277436 kernel: audit: type=1327 audit(1757724939.054:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.278000 audit[5352]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=5352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.290992 systemd-networkd[1410]: calic1439d37418: Link UP Sep 13 00:55:39.278000 audit[5352]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde98f11a0 a2=0 a3=7ffde98f118c items=0 ppid=2858 pid=5352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:39.358493 kernel: audit: type=1325 audit(1757724939.278:282): table=nat:98 family=2 entries=19 op=nft_register_chain pid=5352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.358522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:55:39.358535 kernel: audit: type=1300 audit(1757724939.278:282): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffde98f11a0 a2=0 a3=7ffde98f118c items=0 ppid=2858 pid=5352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:55:39.358547 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic1439d37418: link becomes ready Sep 13 00:55:39.447760 kernel: audit: type=1327 audit(1757724939.278:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.450000 audit[5373]: NETFILTER_CFG table=filter:99 family=2 entries=17 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.473997 systemd-networkd[1410]: calic1439d37418: Gained carrier Sep 13 00:55:39.474373 kernel: audit: type=1325 audit(1757724939.450:283): table=filter:99 family=2 entries=17 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.054 [INFO][5331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.080 [INFO][5331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0 calico-apiserver-77cc844975- calico-apiserver 75fdc49e-31b1-401f-8cb1-69f2cb356414 943 0 2025-09-13 00:55:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77cc844975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd calico-apiserver-77cc844975-jtt5x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1439d37418 [] [] }} ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" 
WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.080 [INFO][5331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.104 [INFO][5356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" HandleID="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.104 [INFO][5356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" HandleID="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2f10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"calico-apiserver-77cc844975-jtt5x", "timestamp":"2025-09-13 00:55:39.104371412 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.104 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.104 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.104 [INFO][5356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.135 [INFO][5356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.139 [INFO][5356] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.279 [INFO][5356] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.280 [INFO][5356] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.282 [INFO][5356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.282 [INFO][5356] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.283 [INFO][5356] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1 Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.285 [INFO][5356] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.289 [INFO][5356] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.5/26] block=192.168.13.0/26 
handle="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.289 [INFO][5356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.5/26] handle="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.289 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:39.480341 env[1672]: 2025-09-13 00:55:39.289 [INFO][5356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.5/26] IPv6=[] ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" HandleID="k8s-pod-network.ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.480779 env[1672]: 2025-09-13 00:55:39.290 [INFO][5331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"75fdc49e-31b1-401f-8cb1-69f2cb356414", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"calico-apiserver-77cc844975-jtt5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1439d37418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:39.480779 env[1672]: 2025-09-13 00:55:39.290 [INFO][5331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.5/32] ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.480779 env[1672]: 2025-09-13 00:55:39.290 [INFO][5331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1439d37418 ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.480779 env[1672]: 2025-09-13 00:55:39.473 [INFO][5331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 
00:55:39.480779 env[1672]: 2025-09-13 00:55:39.474 [INFO][5331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"75fdc49e-31b1-401f-8cb1-69f2cb356414", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1", Pod:"calico-apiserver-77cc844975-jtt5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1439d37418", MAC:"a2:a1:66:8c:c2:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Sep 13 00:55:39.480779 env[1672]: 2025-09-13 00:55:39.479 [INFO][5331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-jtt5x" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:39.484791 env[1672]: time="2025-09-13T00:55:39.484736063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:39.484791 env[1672]: time="2025-09-13T00:55:39.484764358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:39.484791 env[1672]: time="2025-09-13T00:55:39.484774148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:39.484883 env[1672]: time="2025-09-13T00:55:39.484841946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1 pid=5385 runtime=io.containerd.runc.v2 Sep 13 00:55:39.450000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd4ea3390 a2=0 a3=7ffdd4ea337c items=0 ppid=2858 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:39.580412 kernel: audit: type=1300 audit(1757724939.450:283): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd4ea3390 a2=0 a3=7ffdd4ea337c items=0 ppid=2858 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:39.450000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.723132 kernel: audit: type=1327 audit(1757724939.450:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.722000 audit[5373]: NETFILTER_CFG table=nat:100 family=2 entries=35 op=nft_register_chain pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.722000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdd4ea3390 a2=0 a3=7ffdd4ea337c items=0 ppid=2858 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:39.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:39.778378 kernel: audit: type=1325 audit(1757724939.722:284): table=nat:100 family=2 entries=35 op=nft_register_chain pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:39.784081 env[1672]: time="2025-09-13T00:55:39.784057264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-jtt5x,Uid:75fdc49e-31b1-401f-8cb1-69f2cb356414,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1\"" Sep 13 00:55:39.839473 systemd-networkd[1410]: calid74fc2f57dd: Gained IPv6LL Sep 13 00:55:39.916935 env[1672]: time="2025-09-13T00:55:39.916850534Z" level=info msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" Sep 13 00:55:39.917220 env[1672]: time="2025-09-13T00:55:39.916900299Z" level=info msg="StopPodSandbox for 
\"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" Sep 13 00:55:39.917416 env[1672]: time="2025-09-13T00:55:39.917268648Z" level=info msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" Sep 13 00:55:39.966443 systemd-networkd[1410]: calied561b82b78: Gained IPv6LL Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" iface="eth0" netns="/var/run/netns/cni-d20dc23c-6a07-9a8f-f309-051aaf6b22d8" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" iface="eth0" netns="/var/run/netns/cni-d20dc23c-6a07-9a8f-f309-051aaf6b22d8" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" iface="eth0" netns="/var/run/netns/cni-d20dc23c-6a07-9a8f-f309-051aaf6b22d8" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.979 [INFO][5500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.990 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.990 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.990 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.994 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.994 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.995 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:39.996507 env[1672]: 2025-09-13 00:55:39.995 [INFO][5500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:39.996807 env[1672]: time="2025-09-13T00:55:39.996580506Z" level=info msg="TearDown network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" successfully" Sep 13 00:55:39.996807 env[1672]: time="2025-09-13T00:55:39.996600666Z" level=info msg="StopPodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" returns successfully" Sep 13 00:55:39.997028 env[1672]: time="2025-09-13T00:55:39.997014476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-7r74t,Uid:15b25ee3-c882-4d6d-87fd-8435c4ab9603,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:55:39.999426 systemd[1]: run-netns-cni\x2dd20dc23c\x2d6a07\x2d9a8f\x2df309\x2d051aaf6b22d8.mount: Deactivated successfully. 
Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.979 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.979 [INFO][5501] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" iface="eth0" netns="/var/run/netns/cni-41de438c-2fcf-cf16-3343-2d40c2be7a00" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.980 [INFO][5501] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" iface="eth0" netns="/var/run/netns/cni-41de438c-2fcf-cf16-3343-2d40c2be7a00" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.980 [INFO][5501] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" iface="eth0" netns="/var/run/netns/cni-41de438c-2fcf-cf16-3343-2d40c2be7a00" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.980 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.980 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.990 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.990 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.995 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.998 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.998 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:39.999 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:40.001273 env[1672]: 2025-09-13 00:55:40.000 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:40.001597 env[1672]: time="2025-09-13T00:55:40.001331971Z" level=info msg="TearDown network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" successfully" Sep 13 00:55:40.001597 env[1672]: time="2025-09-13T00:55:40.001355012Z" level=info msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" returns successfully" Sep 13 00:55:40.001735 env[1672]: time="2025-09-13T00:55:40.001718119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzrs8,Uid:76f0a7cf-aca7-4535-904d-665ae5104c51,Namespace:calico-system,Attempt:1,}" Sep 13 00:55:40.005077 systemd[1]: run-netns-cni\x2d41de438c\x2d2fcf\x2dcf16\x2d3343\x2d2d40c2be7a00.mount: Deactivated successfully. 
Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.976 [INFO][5502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.976 [INFO][5502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" iface="eth0" netns="/var/run/netns/cni-b1e34b00-e080-610f-3263-7d82593e2c68" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.976 [INFO][5502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" iface="eth0" netns="/var/run/netns/cni-b1e34b00-e080-610f-3263-7d82593e2c68" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.977 [INFO][5502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" iface="eth0" netns="/var/run/netns/cni-b1e34b00-e080-610f-3263-7d82593e2c68" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.977 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.977 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.991 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.991 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:39.999 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:40.003 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:40.003 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:40.004 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:40.005780 env[1672]: 2025-09-13 00:55:40.004 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:40.006087 env[1672]: time="2025-09-13T00:55:40.005858324Z" level=info msg="TearDown network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" successfully" Sep 13 00:55:40.006087 env[1672]: time="2025-09-13T00:55:40.005876222Z" level=info msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" returns successfully" Sep 13 00:55:40.006310 env[1672]: time="2025-09-13T00:55:40.006295985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ht5gv,Uid:1ce64396-8b92-4683-bf8f-d8bcb3fc6a06,Namespace:kube-system,Attempt:1,}" Sep 13 00:55:40.008293 systemd[1]: run-netns-cni\x2db1e34b00\x2de080\x2d610f\x2d3263\x2d7d82593e2c68.mount: Deactivated successfully. Sep 13 00:55:40.059344 systemd-networkd[1410]: calia674898eb82: Link UP Sep 13 00:55:40.085950 systemd-networkd[1410]: calia674898eb82: Gained carrier Sep 13 00:55:40.086369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia674898eb82: link becomes ready Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.013 [INFO][5593] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.021 [INFO][5593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0 calico-apiserver-77cc844975- calico-apiserver 15b25ee3-c882-4d6d-87fd-8435c4ab9603 963 0 2025-09-13 00:55:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77cc844975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd calico-apiserver-77cc844975-7r74t eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calia674898eb82 [] [] }} ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.021 [INFO][5593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.036 [INFO][5659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" HandleID="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.036 [INFO][5659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" HandleID="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"calico-apiserver-77cc844975-7r74t", "timestamp":"2025-09-13 00:55:40.036056875 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:40.092864 env[1672]: 2025-09-13 
00:55:40.036 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.036 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.036 [INFO][5659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.040 [INFO][5659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.043 [INFO][5659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.046 [INFO][5659] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.047 [INFO][5659] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.050 [INFO][5659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.050 [INFO][5659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.051 [INFO][5659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4 Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.054 [INFO][5659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 
handle="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.057 [INFO][5659] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.6/26] block=192.168.13.0/26 handle="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.057 [INFO][5659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.6/26] handle="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.057 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:40.092864 env[1672]: 2025-09-13 00:55:40.057 [INFO][5659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.6/26] IPv6=[] ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" HandleID="k8s-pod-network.8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.058 [INFO][5593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b25ee3-c882-4d6d-87fd-8435c4ab9603", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"calico-apiserver-77cc844975-7r74t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia674898eb82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.058 [INFO][5593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.6/32] ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.058 [INFO][5593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia674898eb82 ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.085 [INFO][5593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.086 [INFO][5593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b25ee3-c882-4d6d-87fd-8435c4ab9603", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4", Pod:"calico-apiserver-77cc844975-7r74t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia674898eb82", MAC:"7e:5e:dc:04:b7:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.093454 env[1672]: 2025-09-13 00:55:40.091 [INFO][5593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4" Namespace="calico-apiserver" Pod="calico-apiserver-77cc844975-7r74t" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:40.094458 systemd-networkd[1410]: calibbec6312a20: Gained IPv6LL Sep 13 00:55:40.099675 env[1672]: time="2025-09-13T00:55:40.097386559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:40.099675 env[1672]: time="2025-09-13T00:55:40.097407832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:40.099675 env[1672]: time="2025-09-13T00:55:40.097414650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:40.099675 env[1672]: time="2025-09-13T00:55:40.097483768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4 pid=5714 runtime=io.containerd.runc.v2 Sep 13 00:55:40.124436 env[1672]: time="2025-09-13T00:55:40.124377473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77cc844975-7r74t,Uid:15b25ee3-c882-4d6d-87fd-8435c4ab9603,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4\"" Sep 13 00:55:40.162194 systemd-networkd[1410]: cali1e628326281: Link UP Sep 13 00:55:40.188749 systemd-networkd[1410]: cali1e628326281: Gained carrier Sep 13 00:55:40.189371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1e628326281: link becomes ready Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.017 [INFO][5605] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.022 [INFO][5605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0 csi-node-driver- calico-system 76f0a7cf-aca7-4535-904d-665ae5104c51 964 0 2025-09-13 00:55:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd csi-node-driver-rzrs8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e628326281 [] [] }} ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" 
WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.022 [INFO][5605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.038 [INFO][5665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" HandleID="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.038 [INFO][5665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" HandleID="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-d04f0c45dd", "pod":"csi-node-driver-rzrs8", "timestamp":"2025-09-13 00:55:40.038303353 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.038 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.057 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.057 [INFO][5665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.141 [INFO][5665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.144 [INFO][5665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.147 [INFO][5665] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.149 [INFO][5665] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.152 [INFO][5665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.152 [INFO][5665] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.153 [INFO][5665] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.155 [INFO][5665] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.160 [INFO][5665] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.7/26] block=192.168.13.0/26 
handle="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.160 [INFO][5665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.7/26] handle="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.160 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:40.195012 env[1672]: 2025-09-13 00:55:40.160 [INFO][5665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.7/26] IPv6=[] ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" HandleID="k8s-pod-network.c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.161 [INFO][5605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76f0a7cf-aca7-4535-904d-665ae5104c51", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"csi-node-driver-rzrs8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e628326281", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.161 [INFO][5605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.7/32] ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.161 [INFO][5605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e628326281 ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.188 [INFO][5605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.188 [INFO][5605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76f0a7cf-aca7-4535-904d-665ae5104c51", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b", Pod:"csi-node-driver-rzrs8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e628326281", MAC:"52:56:e4:d2:32:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.195449 env[1672]: 2025-09-13 00:55:40.194 [INFO][5605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b" Namespace="calico-system" Pod="csi-node-driver-rzrs8" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:40.199405 env[1672]: time="2025-09-13T00:55:40.199378698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:40.199405 env[1672]: time="2025-09-13T00:55:40.199398500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:40.199488 env[1672]: time="2025-09-13T00:55:40.199405337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:40.199488 env[1672]: time="2025-09-13T00:55:40.199474788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b pid=5761 runtime=io.containerd.runc.v2 Sep 13 00:55:40.217712 env[1672]: time="2025-09-13T00:55:40.217688200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzrs8,Uid:76f0a7cf-aca7-4535-904d-665ae5104c51,Namespace:calico-system,Attempt:1,} returns sandbox id \"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b\"" Sep 13 00:55:40.290812 systemd-networkd[1410]: cali2133985625d: Link UP Sep 13 00:55:40.316845 systemd-networkd[1410]: cali2133985625d: Gained carrier Sep 13 00:55:40.317368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2133985625d: link becomes ready Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.019 [INFO][5623] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.025 [INFO][5623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0 coredns-7c65d6cfc9- kube-system 1ce64396-8b92-4683-bf8f-d8bcb3fc6a06 962 0 2025-09-13 00:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-d04f0c45dd coredns-7c65d6cfc9-ht5gv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2133985625d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.025 [INFO][5623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.038 [INFO][5671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" HandleID="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.038 [INFO][5671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" HandleID="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd90), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-3510.3.8-n-d04f0c45dd", "pod":"coredns-7c65d6cfc9-ht5gv", "timestamp":"2025-09-13 00:55:40.038214276 +0000 UTC"}, Hostname:"ci-3510.3.8-n-d04f0c45dd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.038 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.160 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.160 [INFO][5671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-d04f0c45dd' Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.243 [INFO][5671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.253 [INFO][5671] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.262 [INFO][5671] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.268 [INFO][5671] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.272 [INFO][5671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.273 [INFO][5671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 
00:55:40.323581 env[1672]: 2025-09-13 00:55:40.275 [INFO][5671] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.280 [INFO][5671] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.287 [INFO][5671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.8/26] block=192.168.13.0/26 handle="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.287 [INFO][5671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.8/26] handle="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" host="ci-3510.3.8-n-d04f0c45dd" Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.287 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:55:40.323581 env[1672]: 2025-09-13 00:55:40.287 [INFO][5671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.8/26] IPv6=[] ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" HandleID="k8s-pod-network.a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.289 [INFO][5623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"", Pod:"coredns-7c65d6cfc9-ht5gv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2133985625d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.289 [INFO][5623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.8/32] ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.289 [INFO][5623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2133985625d ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.317 [INFO][5623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.317 [INFO][5623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c", Pod:"coredns-7c65d6cfc9-ht5gv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2133985625d", MAC:"4e:cc:3e:28:67:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:40.324022 env[1672]: 2025-09-13 00:55:40.322 [INFO][5623] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ht5gv" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:40.328013 env[1672]: time="2025-09-13T00:55:40.327958430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:55:40.328013 env[1672]: time="2025-09-13T00:55:40.327978152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:55:40.328013 env[1672]: time="2025-09-13T00:55:40.327984974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:55:40.328115 env[1672]: time="2025-09-13T00:55:40.328055183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c pid=5811 runtime=io.containerd.runc.v2 Sep 13 00:55:40.355729 env[1672]: time="2025-09-13T00:55:40.355665694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ht5gv,Uid:1ce64396-8b92-4683-bf8f-d8bcb3fc6a06,Namespace:kube-system,Attempt:1,} returns sandbox id \"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c\"" Sep 13 00:55:40.356864 env[1672]: time="2025-09-13T00:55:40.356850307Z" level=info msg="CreateContainer within sandbox \"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:55:40.360997 env[1672]: time="2025-09-13T00:55:40.360955349Z" level=info msg="CreateContainer within sandbox \"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"cf6ac3612fdc03f06d067160ddd333662a21b645ed5018a979a0429123b3a0e1\"" Sep 13 00:55:40.361178 env[1672]: time="2025-09-13T00:55:40.361163380Z" level=info msg="StartContainer for \"cf6ac3612fdc03f06d067160ddd333662a21b645ed5018a979a0429123b3a0e1\"" Sep 13 00:55:40.478332 env[1672]: time="2025-09-13T00:55:40.478204075Z" level=info msg="StartContainer for \"cf6ac3612fdc03f06d067160ddd333662a21b645ed5018a979a0429123b3a0e1\" returns successfully" Sep 13 00:55:41.006820 kubelet[2677]: I0913 00:55:41.006799 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:41.008777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133656286.mount: Deactivated successfully. Sep 13 00:55:41.019000 audit[5949]: NETFILTER_CFG table=filter:101 family=2 entries=13 op=nft_register_rule pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:41.019000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffc318d600 a2=0 a3=7fffc318d5ec items=0 ppid=2858 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:41.033000 audit[5949]: NETFILTER_CFG table=nat:102 family=2 entries=27 op=nft_register_chain pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:41.033000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fffc318d600 a2=0 a3=7fffc318d5ec items=0 ppid=2858 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.033000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:41.046537 kubelet[2677]: I0913 00:55:41.046505 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ht5gv" podStartSLOduration=38.04649537 podStartE2EDuration="38.04649537s" podCreationTimestamp="2025-09-13 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:41.046305547 +0000 UTC m=+44.207423692" watchObservedRunningTime="2025-09-13 00:55:41.04649537 +0000 UTC m=+44.207613504" Sep 13 00:55:41.054477 systemd-networkd[1410]: calic1439d37418: Gained IPv6LL Sep 13 00:55:41.053000 audit[5951]: NETFILTER_CFG table=filter:103 family=2 entries=12 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:41.053000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc4e2baba0 a2=0 a3=7ffc4e2bab8c items=0 ppid=2858 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.053000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:41.063000 audit[5951]: NETFILTER_CFG table=nat:104 family=2 entries=46 op=nft_register_rule pid=5951 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:41.063000 audit[5951]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffc4e2baba0 a2=0 a3=7ffc4e2bab8c items=0 ppid=2858 pid=5951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 
00:55:41.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:41.478612 env[1672]: time="2025-09-13T00:55:41.478535562Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:41.479143 env[1672]: time="2025-09-13T00:55:41.479101146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:41.479989 env[1672]: time="2025-09-13T00:55:41.479947729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:41.480731 env[1672]: time="2025-09-13T00:55:41.480691566Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:41.481409 env[1672]: time="2025-09-13T00:55:41.481354356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:55:41.482082 env[1672]: time="2025-09-13T00:55:41.482069339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:55:41.482635 env[1672]: time="2025-09-13T00:55:41.482621519Z" level=info msg="CreateContainer within sandbox \"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:55:41.486519 env[1672]: time="2025-09-13T00:55:41.486475291Z" level=info msg="CreateContainer within sandbox 
\"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b9e3c955239ba9b0df08b4b6261d89d56a10905844dbb002d1a2d808aecdded\"" Sep 13 00:55:41.486879 env[1672]: time="2025-09-13T00:55:41.486835567Z" level=info msg="StartContainer for \"1b9e3c955239ba9b0df08b4b6261d89d56a10905844dbb002d1a2d808aecdded\"" Sep 13 00:55:41.503516 systemd-networkd[1410]: cali1e628326281: Gained IPv6LL Sep 13 00:55:41.519194 env[1672]: time="2025-09-13T00:55:41.519166207Z" level=info msg="StartContainer for \"1b9e3c955239ba9b0df08b4b6261d89d56a10905844dbb002d1a2d808aecdded\" returns successfully" Sep 13 00:55:41.693963 kubelet[2677]: I0913 00:55:41.693943 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.718000 audit: BPF prog-id=10 op=LOAD Sep 13 00:55:41.718000 audit[6080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b4fd5c0 a2=98 a3=1fffffffffffffff items=0 ppid=6000 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.718000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:55:41.719000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for 
pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit: BPF prog-id=11 op=LOAD Sep 13 00:55:41.719000 audit[6080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b4fd4a0 a2=94 a3=3 items=0 ppid=6000 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:55:41.719000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { bpf } for pid=6080 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit: BPF prog-id=12 op=LOAD Sep 13 00:55:41.719000 audit[6080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b4fd4e0 a2=94 a3=7ffc4b4fd6c0 items=0 ppid=6000 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:55:41.719000 audit: BPF prog-id=12 op=UNLOAD Sep 13 00:55:41.719000 audit[6080]: AVC avc: denied { perfmon } for pid=6080 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc4b4fd5b0 a2=50 a3=a000000085 items=0 ppid=6000 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit: BPF prog-id=13 op=LOAD Sep 13 00:55:41.719000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe58f80890 a2=98 a3=3 
items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.719000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit: BPF prog-id=14 op=LOAD Sep 13 00:55:41.719000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58f80680 a2=94 a3=54428f items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.719000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } 
for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.719000 audit: BPF prog-id=15 op=LOAD Sep 13 00:55:41.719000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58f806b0 a2=94 a3=2 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.719000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.720000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 
audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit: BPF prog-id=16 op=LOAD Sep 13 00:55:41.810000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe58f80570 a2=94 a3=1 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.810000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.810000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:55:41.810000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:55:41.810000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe58f80640 a2=50 a3=7ffe58f80720 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.810000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f80580 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe58f805b0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe58f804c0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f805d0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f805b0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f805a0 
a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f805d0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe58f805b0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe58f805d0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe58f805a0 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe58f80610 a2=28 a3=0 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe58f803c0 a2=50 a3=1 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit: BPF prog-id=17 op=LOAD Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe58f803c0 a2=94 a3=5 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe58f80470 a2=50 a3=1 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.817000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.817000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe58f80590 a2=4 a3=38 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for 
pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { confidentiality } for pid=6085 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.818000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe58f805e0 a2=94 a3=6 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.818000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { confidentiality } for pid=6085 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.818000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe58f7fd90 a2=94 a3=88 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.818000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { perfmon } for pid=6085 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { bpf } for pid=6085 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.818000 audit[6085]: AVC avc: denied { confidentiality } for pid=6085 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.818000 audit[6085]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe58f7fd90 a2=94 a3=88 items=0 ppid=6000 pid=6085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.818000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit: BPF prog-id=18 op=LOAD Sep 13 00:55:41.822000 audit[6156]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9567e1a0 a2=98 a3=1999999999999999 items=0 ppid=6000 pid=6156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.822000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:55:41.822000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit: BPF prog-id=19 op=LOAD Sep 13 00:55:41.822000 audit[6156]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9567e080 a2=94 a3=ffff items=0 ppid=6000 pid=6156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.822000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:55:41.822000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { 
bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { perfmon } for pid=6156 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit[6156]: AVC avc: denied { bpf } for pid=6156 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.822000 audit: BPF prog-id=20 op=LOAD Sep 13 00:55:41.822000 audit[6156]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffe9567e0c0 a2=94 a3=7ffe9567e2a0 items=0 ppid=6000 pid=6156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.822000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Sep 13 00:55:41.822000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:55:41.855773 systemd-networkd[1410]: vxlan.calico: Link UP Sep 13 00:55:41.855778 systemd-networkd[1410]: vxlan.calico: Gained carrier Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit: BPF prog-id=21 op=LOAD Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3efb7cc0 a2=98 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit: BPF prog-id=22 op=LOAD Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3efb7ad0 a2=94 a3=54428f items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 
00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit: BPF prog-id=23 op=LOAD Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3efb7b00 a2=94 a3=2 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb79d0 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7fff3efb7a00 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff3efb7910 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb7a20 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb7a00 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb79f0 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb7a20 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff3efb7a00 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff3efb7a20 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff3efb79f0 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff3efb7a60 a2=28 a3=0 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.858000 audit: BPF prog-id=24 op=LOAD Sep 13 00:55:41.858000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3efb78d0 a2=94 a3=0 
items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.858000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff3efb78c0 a2=50 a3=2800 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff3efb78c0 a2=50 a3=2800 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.859000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for 
pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit: BPF prog-id=25 op=LOAD Sep 13 00:55:41.859000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3efb70e0 a2=94 a3=2 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.859000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { perfmon } for pid=6180 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit[6180]: AVC avc: denied { bpf } for pid=6180 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.859000 audit: BPF prog-id=26 op=LOAD Sep 13 00:55:41.859000 audit[6180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff3efb71e0 a2=94 a3=30 items=0 ppid=6000 pid=6180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit: BPF prog-id=27 op=LOAD Sep 13 00:55:41.860000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc40621a80 a2=98 a3=0 items=0 ppid=6000 pid=6184 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.860000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.860000 audit: BPF prog-id=27 op=UNLOAD Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 
audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit: BPF prog-id=28 op=LOAD Sep 13 00:55:41.860000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc40621870 a2=94 a3=54428f items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.860000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.860000 audit: BPF prog-id=28 op=UNLOAD Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for 
pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.860000 audit: BPF prog-id=29 op=LOAD Sep 13 00:55:41.860000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc406218a0 a2=94 a3=2 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.860000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.860000 audit: BPF prog-id=29 op=UNLOAD Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit: BPF prog-id=30 op=LOAD Sep 13 00:55:41.951000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc40621760 a2=94 a3=1 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Sep 13 00:55:41.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.951000 audit: BPF prog-id=30 op=UNLOAD Sep 13 00:55:41.951000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.951000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc40621830 a2=50 a3=7ffc40621910 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.951000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc40621770 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc406217a0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc406216b0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc406217c0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc406217a0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc40621790 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc406217c0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc406217a0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc406217c0 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc40621790 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc40621800 a2=28 a3=0 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc406215b0 a2=50 a3=1 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit: BPF prog-id=31 op=LOAD Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc406215b0 a2=94 a3=5 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit: BPF prog-id=31 op=UNLOAD Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc40621660 a2=50 a3=1 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc40621780 a2=4 a3=38 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: 
denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { confidentiality } for pid=6184 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc406217d0 a2=94 a3=6 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { confidentiality } for pid=6184 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc40620f80 a2=94 a3=88 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { perfmon } for pid=6184 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.958000 audit[6184]: AVC avc: denied { confidentiality } for pid=6184 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Sep 13 00:55:41.958000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc40620f80 a2=94 a3=88 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.958000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.959000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.959000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc406229b0 a2=10 a3=f8f00800 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.959000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.959000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc40622850 a2=10 a3=3 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.959000 audit[6184]: AVC avc: denied { bpf } for pid=6184 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.959000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc406227f0 a2=10 a3=3 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.959000 audit[6184]: AVC avc: denied { bpf } for pid=6184 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Sep 13 00:55:41.959000 audit[6184]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc406227f0 a2=10 a3=7 items=0 ppid=6000 pid=6184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:41.959000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Sep 13 00:55:41.976000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:55:42.015435 systemd-networkd[1410]: calia674898eb82: Gained IPv6LL Sep 13 00:55:42.014000 audit[6248]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=6248 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:42.014000 audit[6248]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc854553e0 a2=0 a3=7ffc854553cc items=0 ppid=6000 pid=6248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.014000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:42.018000 audit[6247]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=6247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:42.018000 audit[6247]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe7e890ef0 a2=0 a3=7ffe7e890edc items=0 ppid=6000 pid=6247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.018000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:42.021000 audit[6246]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=6246 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:42.021000 audit[6246]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd814203f0 a2=0 a3=7ffd814203dc items=0 ppid=6000 pid=6246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.021000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:42.050840 kubelet[2677]: I0913 00:55:42.050802 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-mk7xg" podStartSLOduration=26.717768998 podStartE2EDuration="30.050788176s" podCreationTimestamp="2025-09-13 00:55:12 +0000 UTC" firstStartedPulling="2025-09-13 00:55:38.148946452 +0000 UTC m=+41.310064586" lastFinishedPulling="2025-09-13 00:55:41.481965626 +0000 UTC m=+44.643083764" observedRunningTime="2025-09-13 00:55:42.050306463 +0000 UTC m=+45.211424610" watchObservedRunningTime="2025-09-13 00:55:42.050788176 +0000 UTC m=+45.211906315" Sep 13 00:55:42.066000 audit[6259]: NETFILTER_CFG table=filter:108 family=2 entries=12 op=nft_register_rule pid=6259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:42.066000 audit[6259]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcfc577ee0 a2=0 a3=7ffcfc577ecc items=0 ppid=2858 pid=6259 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:42.076000 audit[6251]: NETFILTER_CFG table=filter:109 family=2 entries=315 op=nft_register_chain pid=6251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Sep 13 00:55:42.076000 audit[6251]: SYSCALL arch=c000003e syscall=46 success=yes exit=187764 a0=3 a1=7ffc2554e6f0 a2=0 a3=55f464c2b000 items=0 ppid=6000 pid=6251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.076000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Sep 13 00:55:42.083000 audit[6259]: NETFILTER_CFG table=nat:110 family=2 entries=58 op=nft_register_chain pid=6259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:42.083000 audit[6259]: SYSCALL arch=c000003e syscall=46 success=yes exit=20628 a0=3 a1=7ffcfc577ee0 a2=0 a3=7ffcfc577ecc items=0 ppid=2858 pid=6259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:42.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:42.270596 systemd-networkd[1410]: cali2133985625d: Gained IPv6LL Sep 13 00:55:43.043951 kubelet[2677]: I0913 00:55:43.043930 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 
00:55:43.100000 audit[6265]: NETFILTER_CFG table=filter:111 family=2 entries=12 op=nft_register_rule pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:43.100000 audit[6265]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe39e3bdf0 a2=0 a3=7ffe39e3bddc items=0 ppid=2858 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:43.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:43.113000 audit[6265]: NETFILTER_CFG table=nat:112 family=2 entries=22 op=nft_register_rule pid=6265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:43.113000 audit[6265]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe39e3bdf0 a2=0 a3=7ffe39e3bddc items=0 ppid=2858 pid=6265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:43.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:43.550541 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Sep 13 00:55:44.487977 env[1672]: time="2025-09-13T00:55:44.487921458Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:44.488544 env[1672]: time="2025-09-13T00:55:44.488505619Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 
00:55:44.489081 env[1672]: time="2025-09-13T00:55:44.489043774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:44.489710 env[1672]: time="2025-09-13T00:55:44.489684582Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:44.490313 env[1672]: time="2025-09-13T00:55:44.490296273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:55:44.490814 env[1672]: time="2025-09-13T00:55:44.490798044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:55:44.494025 env[1672]: time="2025-09-13T00:55:44.494007363Z" level=info msg="CreateContainer within sandbox \"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:55:44.519090 env[1672]: time="2025-09-13T00:55:44.518994377Z" level=info msg="CreateContainer within sandbox \"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1ed7f1426857def00d4a1ba7cb56ffc7ecdbcfc21e7ca038d13497a319a7c34a\"" Sep 13 00:55:44.519803 env[1672]: time="2025-09-13T00:55:44.519708291Z" level=info msg="StartContainer for \"1ed7f1426857def00d4a1ba7cb56ffc7ecdbcfc21e7ca038d13497a319a7c34a\"" Sep 13 00:55:44.561197 env[1672]: time="2025-09-13T00:55:44.561173847Z" level=info msg="StartContainer for \"1ed7f1426857def00d4a1ba7cb56ffc7ecdbcfc21e7ca038d13497a319a7c34a\" returns successfully" Sep 13 00:55:45.072844 kubelet[2677]: I0913 
00:55:45.072736 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fddd77667-rhh4p" podStartSLOduration=25.804968396 podStartE2EDuration="32.072699869s" podCreationTimestamp="2025-09-13 00:55:13 +0000 UTC" firstStartedPulling="2025-09-13 00:55:38.222986937 +0000 UTC m=+41.384105072" lastFinishedPulling="2025-09-13 00:55:44.490718411 +0000 UTC m=+47.651836545" observedRunningTime="2025-09-13 00:55:45.071549936 +0000 UTC m=+48.232668150" watchObservedRunningTime="2025-09-13 00:55:45.072699869 +0000 UTC m=+48.233818050" Sep 13 00:55:46.053484 kubelet[2677]: I0913 00:55:46.053468 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:47.588828 env[1672]: time="2025-09-13T00:55:47.588773373Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.589414 env[1672]: time="2025-09-13T00:55:47.589366063Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.589988 env[1672]: time="2025-09-13T00:55:47.589948541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.590628 env[1672]: time="2025-09-13T00:55:47.590589839Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.591180 env[1672]: time="2025-09-13T00:55:47.591138784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:55:47.591661 env[1672]: time="2025-09-13T00:55:47.591648199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:55:47.592153 env[1672]: time="2025-09-13T00:55:47.592136619Z" level=info msg="CreateContainer within sandbox \"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:55:47.595815 env[1672]: time="2025-09-13T00:55:47.595778173Z" level=info msg="CreateContainer within sandbox \"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb40ccff379e3516c361aedc316e57c01fcd25296f136c4a6f59977b289d9a58\"" Sep 13 00:55:47.596143 env[1672]: time="2025-09-13T00:55:47.596064972Z" level=info msg="StartContainer for \"cb40ccff379e3516c361aedc316e57c01fcd25296f136c4a6f59977b289d9a58\"" Sep 13 00:55:47.639419 env[1672]: time="2025-09-13T00:55:47.639329803Z" level=info msg="StartContainer for \"cb40ccff379e3516c361aedc316e57c01fcd25296f136c4a6f59977b289d9a58\" returns successfully" Sep 13 00:55:47.973864 env[1672]: time="2025-09-13T00:55:47.973783456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.974538 env[1672]: time="2025-09-13T00:55:47.974497961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.975073 env[1672]: time="2025-09-13T00:55:47.975023854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.975788 env[1672]: 
time="2025-09-13T00:55:47.975744053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:47.976128 env[1672]: time="2025-09-13T00:55:47.976080562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:55:47.976757 env[1672]: time="2025-09-13T00:55:47.976721166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:55:47.977284 env[1672]: time="2025-09-13T00:55:47.977271769Z" level=info msg="CreateContainer within sandbox \"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:55:47.980906 env[1672]: time="2025-09-13T00:55:47.980891129Z" level=info msg="CreateContainer within sandbox \"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9a47396fc4d55acee85023a5f3ebeaf2973b0fd03183d31096e275ec158b1237\"" Sep 13 00:55:47.981232 env[1672]: time="2025-09-13T00:55:47.981195462Z" level=info msg="StartContainer for \"9a47396fc4d55acee85023a5f3ebeaf2973b0fd03183d31096e275ec158b1237\"" Sep 13 00:55:48.013912 env[1672]: time="2025-09-13T00:55:48.013857243Z" level=info msg="StartContainer for \"9a47396fc4d55acee85023a5f3ebeaf2973b0fd03183d31096e275ec158b1237\" returns successfully" Sep 13 00:55:48.063453 kubelet[2677]: I0913 00:55:48.063416 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77cc844975-jtt5x" podStartSLOduration=30.256464579 podStartE2EDuration="38.063405225s" podCreationTimestamp="2025-09-13 00:55:10 +0000 UTC" firstStartedPulling="2025-09-13 00:55:39.784639452 +0000 UTC 
m=+42.945757591" lastFinishedPulling="2025-09-13 00:55:47.5915801 +0000 UTC m=+50.752698237" observedRunningTime="2025-09-13 00:55:48.062889889 +0000 UTC m=+51.224008027" watchObservedRunningTime="2025-09-13 00:55:48.063405225 +0000 UTC m=+51.224523361" Sep 13 00:55:48.069240 kubelet[2677]: I0913 00:55:48.069207 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77cc844975-7r74t" podStartSLOduration=30.219996388 podStartE2EDuration="38.069194379s" podCreationTimestamp="2025-09-13 00:55:10 +0000 UTC" firstStartedPulling="2025-09-13 00:55:40.12742737 +0000 UTC m=+43.288545504" lastFinishedPulling="2025-09-13 00:55:47.976625358 +0000 UTC m=+51.137743495" observedRunningTime="2025-09-13 00:55:48.068812582 +0000 UTC m=+51.229930725" watchObservedRunningTime="2025-09-13 00:55:48.069194379 +0000 UTC m=+51.230312514" Sep 13 00:55:48.068000 audit[6436]: NETFILTER_CFG table=filter:113 family=2 entries=12 op=nft_register_rule pid=6436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.093027 kernel: kauditd_printk_skb: 548 callbacks suppressed Sep 13 00:55:48.093076 kernel: audit: type=1325 audit(1757724948.068:395): table=filter:113 family=2 entries=12 op=nft_register_rule pid=6436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.068000 audit[6436]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd7e3205e0 a2=0 a3=7ffd7e3205cc items=0 ppid=2858 pid=6436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.233001 kernel: audit: type=1300 audit(1757724948.068:395): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd7e3205e0 a2=0 a3=7ffd7e3205cc items=0 ppid=2858 pid=6436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.233075 kernel: audit: type=1327 audit(1757724948.068:395): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.294000 audit[6436]: NETFILTER_CFG table=nat:114 family=2 entries=22 op=nft_register_rule pid=6436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.294000 audit[6436]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd7e3205e0 a2=0 a3=7ffd7e3205cc items=0 ppid=2858 pid=6436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.436979 kernel: audit: type=1325 audit(1757724948.294:396): table=nat:114 family=2 entries=22 op=nft_register_rule pid=6436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.437045 kernel: audit: type=1300 audit(1757724948.294:396): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd7e3205e0 a2=0 a3=7ffd7e3205cc items=0 ppid=2858 pid=6436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.437059 kernel: audit: type=1327 audit(1757724948.294:396): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.509000 audit[6439]: NETFILTER_CFG table=filter:115 family=2 
entries=12 op=nft_register_rule pid=6439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.509000 audit[6439]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc5e113740 a2=0 a3=7ffc5e11372c items=0 ppid=2858 pid=6439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.661132 kernel: audit: type=1325 audit(1757724948.509:397): table=filter:115 family=2 entries=12 op=nft_register_rule pid=6439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.661198 kernel: audit: type=1300 audit(1757724948.509:397): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc5e113740 a2=0 a3=7ffc5e11372c items=0 ppid=2858 pid=6439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.661212 kernel: audit: type=1327 audit(1757724948.509:397): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.733000 audit[6439]: NETFILTER_CFG table=nat:116 family=2 entries=22 op=nft_register_rule pid=6439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:48.733000 audit[6439]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc5e113740 a2=0 a3=7ffc5e11372c items=0 ppid=2858 pid=6439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:48.733000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:48.792417 kernel: audit: type=1325 audit(1757724948.733:398): table=nat:116 family=2 entries=22 op=nft_register_rule pid=6439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:49.058559 kubelet[2677]: I0913 00:55:49.058485 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:49.058559 kubelet[2677]: I0913 00:55:49.058488 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:49.354809 env[1672]: time="2025-09-13T00:55:49.354678861Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:49.355784 env[1672]: time="2025-09-13T00:55:49.355748016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:49.357608 env[1672]: time="2025-09-13T00:55:49.357576280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:49.359592 env[1672]: time="2025-09-13T00:55:49.359537216Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:49.360105 env[1672]: time="2025-09-13T00:55:49.360074236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:55:49.362135 env[1672]: time="2025-09-13T00:55:49.362103428Z" level=info 
msg="CreateContainer within sandbox \"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:55:49.369602 env[1672]: time="2025-09-13T00:55:49.369545102Z" level=info msg="CreateContainer within sandbox \"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c68ac2f50e819f6621ebc922abd8a07d451bbe50d79a05ad91b25e215326984\"" Sep 13 00:55:49.369974 env[1672]: time="2025-09-13T00:55:49.369921001Z" level=info msg="StartContainer for \"0c68ac2f50e819f6621ebc922abd8a07d451bbe50d79a05ad91b25e215326984\"" Sep 13 00:55:49.399202 env[1672]: time="2025-09-13T00:55:49.399179350Z" level=info msg="StartContainer for \"0c68ac2f50e819f6621ebc922abd8a07d451bbe50d79a05ad91b25e215326984\" returns successfully" Sep 13 00:55:49.399745 env[1672]: time="2025-09-13T00:55:49.399733053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:55:50.289481 kubelet[2677]: I0913 00:55:50.289344 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:50.350000 audit[6521]: NETFILTER_CFG table=filter:117 family=2 entries=11 op=nft_register_rule pid=6521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:50.350000 audit[6521]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc2ddb51f0 a2=0 a3=7ffc2ddb51dc items=0 ppid=2858 pid=6521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:50.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:50.364000 audit[6521]: NETFILTER_CFG table=nat:118 family=2 entries=29 op=nft_register_chain pid=6521 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:50.364000 audit[6521]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffc2ddb51f0 a2=0 a3=7ffc2ddb51dc items=0 ppid=2858 pid=6521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:50.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:50.886255 env[1672]: time="2025-09-13T00:55:50.886230443Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:50.886853 env[1672]: time="2025-09-13T00:55:50.886839035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:50.887471 env[1672]: time="2025-09-13T00:55:50.887459042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:50.888086 env[1672]: time="2025-09-13T00:55:50.888060948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:55:50.888436 env[1672]: time="2025-09-13T00:55:50.888420602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:55:50.889694 env[1672]: 
time="2025-09-13T00:55:50.889680896Z" level=info msg="CreateContainer within sandbox \"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:55:50.894108 env[1672]: time="2025-09-13T00:55:50.894066007Z" level=info msg="CreateContainer within sandbox \"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"44e86e0c072e590262fcf0032537b493423e9af4a4db4665b486a7d63d93ec3d\"" Sep 13 00:55:50.894324 env[1672]: time="2025-09-13T00:55:50.894291161Z" level=info msg="StartContainer for \"44e86e0c072e590262fcf0032537b493423e9af4a4db4665b486a7d63d93ec3d\"" Sep 13 00:55:50.917940 env[1672]: time="2025-09-13T00:55:50.917886079Z" level=info msg="StartContainer for \"44e86e0c072e590262fcf0032537b493423e9af4a4db4665b486a7d63d93ec3d\" returns successfully" Sep 13 00:55:50.965860 kubelet[2677]: I0913 00:55:50.965799 2677 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:55:50.965860 kubelet[2677]: I0913 00:55:50.965870 2677 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:55:51.095554 kubelet[2677]: I0913 00:55:51.095452 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rzrs8" podStartSLOduration=27.424703005 podStartE2EDuration="38.095416448s" podCreationTimestamp="2025-09-13 00:55:13 +0000 UTC" firstStartedPulling="2025-09-13 00:55:40.218255256 +0000 UTC m=+43.379373390" lastFinishedPulling="2025-09-13 00:55:50.888968698 +0000 UTC m=+54.050086833" observedRunningTime="2025-09-13 00:55:51.094508011 +0000 UTC m=+54.255626240" watchObservedRunningTime="2025-09-13 00:55:51.095416448 +0000 UTC 
m=+54.256534631" Sep 13 00:55:52.467988 kubelet[2677]: I0913 00:55:52.467893 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:52.521000 audit[6583]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=6583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:52.521000 audit[6583]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fff5e90b0d0 a2=0 a3=7fff5e90b0bc items=0 ppid=2858 pid=6583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:52.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:52.531000 audit[6583]: NETFILTER_CFG table=nat:120 family=2 entries=31 op=nft_register_chain pid=6583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:55:52.531000 audit[6583]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fff5e90b0d0 a2=0 a3=7fff5e90b0bc items=0 ppid=2858 pid=6583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:55:52.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:55:53.693817 kubelet[2677]: I0913 00:55:53.693796 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:55:56.914885 env[1672]: time="2025-09-13T00:55:56.914765469Z" level=info msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.970 [WARNING][6634] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7aefc875-a5a0-4dd2-a7a7-adf706fc5036", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895", Pod:"goldmane-7988f88666-mk7xg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid74fc2f57dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.970 [INFO][6634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.970 [INFO][6634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" iface="eth0" netns="" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.970 [INFO][6634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.970 [INFO][6634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.987 [INFO][6652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.987 [INFO][6652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.987 [INFO][6652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.993 [WARNING][6652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.993 [INFO][6652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.995 [INFO][6652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:56.997734 env[1672]: 2025-09-13 00:55:56.996 [INFO][6634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:56.998298 env[1672]: time="2025-09-13T00:55:56.997727339Z" level=info msg="TearDown network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" successfully" Sep 13 00:55:56.998298 env[1672]: time="2025-09-13T00:55:56.997755981Z" level=info msg="StopPodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" returns successfully" Sep 13 00:55:56.998298 env[1672]: time="2025-09-13T00:55:56.998213923Z" level=info msg="RemovePodSandbox for \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" Sep 13 00:55:56.998298 env[1672]: time="2025-09-13T00:55:56.998242694Z" level=info msg="Forcibly stopping sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\"" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.026 [WARNING][6679] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"7aefc875-a5a0-4dd2-a7a7-adf706fc5036", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"6d4b29d5db9deae12ae7f964eb5a4217cf465d0bcfa2aacdb77449ee29fff895", Pod:"goldmane-7988f88666-mk7xg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid74fc2f57dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.026 [INFO][6679] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.026 [INFO][6679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" iface="eth0" netns="" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.026 [INFO][6679] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.026 [INFO][6679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.045 [INFO][6696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.045 [INFO][6696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.045 [INFO][6696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.051 [WARNING][6696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.051 [INFO][6696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" HandleID="k8s-pod-network.5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-goldmane--7988f88666--mk7xg-eth0" Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.053 [INFO][6696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.056255 env[1672]: 2025-09-13 00:55:57.054 [INFO][6679] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825" Sep 13 00:55:57.056905 env[1672]: time="2025-09-13T00:55:57.056276562Z" level=info msg="TearDown network for sandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" successfully" Sep 13 00:55:57.058438 env[1672]: time="2025-09-13T00:55:57.058415578Z" level=info msg="RemovePodSandbox \"5b449b6623d1ea8e677eedb09f9bffd36210c5613739557fb1894dc6c3f93825\" returns successfully" Sep 13 00:55:57.058805 env[1672]: time="2025-09-13T00:55:57.058780878Z" level=info msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.087 [WARNING][6725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ab1eff7f-6190-416d-98ad-c67415ecaa0b", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116", Pod:"coredns-7c65d6cfc9-bzpbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbec6312a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.113635 env[1672]: 2025-09-13 
00:55:57.087 [INFO][6725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.087 [INFO][6725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" iface="eth0" netns="" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.087 [INFO][6725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.087 [INFO][6725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.103 [INFO][6742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.104 [INFO][6742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.104 [INFO][6742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.109 [WARNING][6742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.109 [INFO][6742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.111 [INFO][6742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.113635 env[1672]: 2025-09-13 00:55:57.112 [INFO][6725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.114193 env[1672]: time="2025-09-13T00:55:57.113645838Z" level=info msg="TearDown network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" successfully" Sep 13 00:55:57.114193 env[1672]: time="2025-09-13T00:55:57.113675218Z" level=info msg="StopPodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" returns successfully" Sep 13 00:55:57.114193 env[1672]: time="2025-09-13T00:55:57.114033238Z" level=info msg="RemovePodSandbox for \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" Sep 13 00:55:57.114193 env[1672]: time="2025-09-13T00:55:57.114064725Z" level=info msg="Forcibly stopping sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\"" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.145 [WARNING][6767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ab1eff7f-6190-416d-98ad-c67415ecaa0b", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8e6756ce3dc530f0eab15468741e1a8413ae16809e7273ed33ec94388e952116", Pod:"coredns-7c65d6cfc9-bzpbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibbec6312a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.198761 env[1672]: 2025-09-13 
00:55:57.146 [INFO][6767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.146 [INFO][6767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" iface="eth0" netns="" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.146 [INFO][6767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.146 [INFO][6767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.187 [INFO][6782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.188 [INFO][6782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.188 [INFO][6782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.194 [WARNING][6782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.194 [INFO][6782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" HandleID="k8s-pod-network.3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--bzpbb-eth0" Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.196 [INFO][6782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.198761 env[1672]: 2025-09-13 00:55:57.197 [INFO][6767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04" Sep 13 00:55:57.198761 env[1672]: time="2025-09-13T00:55:57.198743747Z" level=info msg="TearDown network for sandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" successfully" Sep 13 00:55:57.201324 env[1672]: time="2025-09-13T00:55:57.201299719Z" level=info msg="RemovePodSandbox \"3a15188ce56adbdfdfd198f8a57e165ff18e5428976cf89a107871c88536bd04\" returns successfully" Sep 13 00:55:57.201674 env[1672]: time="2025-09-13T00:55:57.201632494Z" level=info msg="StopPodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.230 [WARNING][6808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"75fdc49e-31b1-401f-8cb1-69f2cb356414", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1", Pod:"calico-apiserver-77cc844975-jtt5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1439d37418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.230 [INFO][6808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.230 [INFO][6808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" iface="eth0" netns="" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.230 [INFO][6808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.230 [INFO][6808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.247 [INFO][6825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.247 [INFO][6825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.247 [INFO][6825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.253 [WARNING][6825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.253 [INFO][6825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.255 [INFO][6825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.257530 env[1672]: 2025-09-13 00:55:57.256 [INFO][6808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.258137 env[1672]: time="2025-09-13T00:55:57.257553344Z" level=info msg="TearDown network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" successfully" Sep 13 00:55:57.258137 env[1672]: time="2025-09-13T00:55:57.257583389Z" level=info msg="StopPodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" returns successfully" Sep 13 00:55:57.258137 env[1672]: time="2025-09-13T00:55:57.257968716Z" level=info msg="RemovePodSandbox for \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" Sep 13 00:55:57.258137 env[1672]: time="2025-09-13T00:55:57.257997116Z" level=info msg="Forcibly stopping sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\"" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.285 [WARNING][6853] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"75fdc49e-31b1-401f-8cb1-69f2cb356414", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"ed2e498081bb291867593e35871513dd0f446bb1a8db02d3b397b2c66552ddf1", Pod:"calico-apiserver-77cc844975-jtt5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1439d37418", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.286 [INFO][6853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.286 [INFO][6853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" iface="eth0" netns="" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.286 [INFO][6853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.286 [INFO][6853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.302 [INFO][6870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.302 [INFO][6870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.302 [INFO][6870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.308 [WARNING][6870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.308 [INFO][6870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" HandleID="k8s-pod-network.007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--jtt5x-eth0" Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.309 [INFO][6870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.312306 env[1672]: 2025-09-13 00:55:57.311 [INFO][6853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91" Sep 13 00:55:57.312856 env[1672]: time="2025-09-13T00:55:57.312330564Z" level=info msg="TearDown network for sandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" successfully" Sep 13 00:55:57.314651 env[1672]: time="2025-09-13T00:55:57.314600757Z" level=info msg="RemovePodSandbox \"007f1563500858556d34c41f4f25276fcf614b27093c3a53376073e7091b0a91\" returns successfully" Sep 13 00:55:57.315027 env[1672]: time="2025-09-13T00:55:57.314973013Z" level=info msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.341 [WARNING][6898] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.341 [INFO][6898] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.341 [INFO][6898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" iface="eth0" netns="" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.341 [INFO][6898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.341 [INFO][6898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.359 [INFO][6911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.359 [INFO][6911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.359 [INFO][6911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.367 [WARNING][6911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.367 [INFO][6911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.369 [INFO][6911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.373577 env[1672]: 2025-09-13 00:55:57.371 [INFO][6898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.374995 env[1672]: time="2025-09-13T00:55:57.373627436Z" level=info msg="TearDown network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" successfully" Sep 13 00:55:57.374995 env[1672]: time="2025-09-13T00:55:57.373697342Z" level=info msg="StopPodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" returns successfully" Sep 13 00:55:57.374995 env[1672]: time="2025-09-13T00:55:57.374500442Z" level=info msg="RemovePodSandbox for \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" Sep 13 00:55:57.374995 env[1672]: time="2025-09-13T00:55:57.374584197Z" level=info msg="Forcibly stopping sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\"" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.429 [WARNING][6937] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" 
WorkloadEndpoint="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.429 [INFO][6937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.429 [INFO][6937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" iface="eth0" netns="" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.429 [INFO][6937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.429 [INFO][6937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.458 [INFO][6954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.458 [INFO][6954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.458 [INFO][6954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.463 [WARNING][6954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.463 [INFO][6954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" HandleID="k8s-pod-network.49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-whisker--7c8756dc7f--dsnw2-eth0" Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.465 [INFO][6954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.467048 env[1672]: 2025-09-13 00:55:57.466 [INFO][6937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89" Sep 13 00:55:57.467048 env[1672]: time="2025-09-13T00:55:57.467028022Z" level=info msg="TearDown network for sandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" successfully" Sep 13 00:55:57.468815 env[1672]: time="2025-09-13T00:55:57.468767339Z" level=info msg="RemovePodSandbox \"49882d32db3d23ec86f9d3d1d30ebe5c63efe894232aa917426c1d6c3b176c89\" returns successfully" Sep 13 00:55:57.469103 env[1672]: time="2025-09-13T00:55:57.469066313Z" level=info msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.491 [WARNING][6979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0", GenerateName:"calico-kube-controllers-fddd77667-", Namespace:"calico-system", SelfLink:"", UID:"afe91dfa-20ea-43ed-b9ae-2f363b41f123", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddd77667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc", Pod:"calico-kube-controllers-fddd77667-rhh4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied561b82b78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.491 [INFO][6979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.491 [INFO][6979] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" iface="eth0" netns="" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.491 [INFO][6979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.491 [INFO][6979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.504 [INFO][6995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.504 [INFO][6995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.504 [INFO][6995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.509 [WARNING][6995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.509 [INFO][6995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.510 [INFO][6995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.512934 env[1672]: 2025-09-13 00:55:57.511 [INFO][6979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.513379 env[1672]: time="2025-09-13T00:55:57.512926648Z" level=info msg="TearDown network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" successfully" Sep 13 00:55:57.513379 env[1672]: time="2025-09-13T00:55:57.512953513Z" level=info msg="StopPodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" returns successfully" Sep 13 00:55:57.513379 env[1672]: time="2025-09-13T00:55:57.513259530Z" level=info msg="RemovePodSandbox for \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" Sep 13 00:55:57.513379 env[1672]: time="2025-09-13T00:55:57.513286524Z" level=info msg="Forcibly stopping sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\"" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.537 [WARNING][7021] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0", GenerateName:"calico-kube-controllers-fddd77667-", Namespace:"calico-system", SelfLink:"", UID:"afe91dfa-20ea-43ed-b9ae-2f363b41f123", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fddd77667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"2f9bee44d13d399fd8b5ec2a6f5cada1caf3e9350b47ced9a2868959cc6984bc", Pod:"calico-kube-controllers-fddd77667-rhh4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied561b82b78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.537 [INFO][7021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.537 [INFO][7021] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" iface="eth0" netns="" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.537 [INFO][7021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.537 [INFO][7021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.555 [INFO][7039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.556 [INFO][7039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.556 [INFO][7039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.561 [WARNING][7039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.561 [INFO][7039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" HandleID="k8s-pod-network.c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--kube--controllers--fddd77667--rhh4p-eth0" Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.562 [INFO][7039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.565092 env[1672]: 2025-09-13 00:55:57.563 [INFO][7021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57" Sep 13 00:55:57.565616 env[1672]: time="2025-09-13T00:55:57.565126533Z" level=info msg="TearDown network for sandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" successfully" Sep 13 00:55:57.567122 env[1672]: time="2025-09-13T00:55:57.567103381Z" level=info msg="RemovePodSandbox \"c2d22e714f77514112ad1521360fc085fa7f95d2af9759d9df14943c60a0cd57\" returns successfully" Sep 13 00:55:57.567494 env[1672]: time="2025-09-13T00:55:57.567472905Z" level=info msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.589 [WARNING][7062] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76f0a7cf-aca7-4535-904d-665ae5104c51", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b", Pod:"csi-node-driver-rzrs8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e628326281", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.589 [INFO][7062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.589 [INFO][7062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" iface="eth0" netns="" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.590 [INFO][7062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.590 [INFO][7062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.603 [INFO][7079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.603 [INFO][7079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.603 [INFO][7079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.607 [WARNING][7079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.607 [INFO][7079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.609 [INFO][7079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.611101 env[1672]: 2025-09-13 00:55:57.610 [INFO][7062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.611569 env[1672]: time="2025-09-13T00:55:57.611123602Z" level=info msg="TearDown network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" successfully" Sep 13 00:55:57.611569 env[1672]: time="2025-09-13T00:55:57.611147597Z" level=info msg="StopPodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" returns successfully" Sep 13 00:55:57.611569 env[1672]: time="2025-09-13T00:55:57.611449848Z" level=info msg="RemovePodSandbox for \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" Sep 13 00:55:57.611569 env[1672]: time="2025-09-13T00:55:57.611479796Z" level=info msg="Forcibly stopping sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\"" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.633 [WARNING][7103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76f0a7cf-aca7-4535-904d-665ae5104c51", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"c702f6eebfe7e4a14623fa25f38ee83519f1a41e0bae0eacde7aaebabd92b38b", Pod:"csi-node-driver-rzrs8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e628326281", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.633 [INFO][7103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.634 [INFO][7103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" iface="eth0" netns="" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.634 [INFO][7103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.634 [INFO][7103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.647 [INFO][7119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.647 [INFO][7119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.647 [INFO][7119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.651 [WARNING][7119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.651 [INFO][7119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" HandleID="k8s-pod-network.ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-csi--node--driver--rzrs8-eth0" Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.652 [INFO][7119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.654741 env[1672]: 2025-09-13 00:55:57.653 [INFO][7103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298" Sep 13 00:55:57.655196 env[1672]: time="2025-09-13T00:55:57.654774419Z" level=info msg="TearDown network for sandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" successfully" Sep 13 00:55:57.656550 env[1672]: time="2025-09-13T00:55:57.656534042Z" level=info msg="RemovePodSandbox \"ce67cdcc08927cc7c896636d183b1e05c79691228b2ba600a5a713c4bf0fb298\" returns successfully" Sep 13 00:55:57.656938 env[1672]: time="2025-09-13T00:55:57.656920786Z" level=info msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.679 [WARNING][7144] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c", Pod:"coredns-7c65d6cfc9-ht5gv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2133985625d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.702454 env[1672]: 2025-09-13 
00:55:57.679 [INFO][7144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.679 [INFO][7144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" iface="eth0" netns="" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.679 [INFO][7144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.679 [INFO][7144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.693 [INFO][7160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.693 [INFO][7160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.693 [INFO][7160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.699 [WARNING][7160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.699 [INFO][7160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.700 [INFO][7160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.702454 env[1672]: 2025-09-13 00:55:57.701 [INFO][7144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.702959 env[1672]: time="2025-09-13T00:55:57.702474393Z" level=info msg="TearDown network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" successfully" Sep 13 00:55:57.702959 env[1672]: time="2025-09-13T00:55:57.702499446Z" level=info msg="StopPodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" returns successfully" Sep 13 00:55:57.702959 env[1672]: time="2025-09-13T00:55:57.702821002Z" level=info msg="RemovePodSandbox for \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" Sep 13 00:55:57.702959 env[1672]: time="2025-09-13T00:55:57.702843134Z" level=info msg="Forcibly stopping sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\"" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.735 [WARNING][7188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1ce64396-8b92-4683-bf8f-d8bcb3fc6a06", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"a64fb73c4bc920369b1de063b5951e059ee8763c7e786bae3c97530bbfa0a48c", Pod:"coredns-7c65d6cfc9-ht5gv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2133985625d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.804528 env[1672]: 2025-09-13 
00:55:57.736 [INFO][7188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.736 [INFO][7188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" iface="eth0" netns="" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.736 [INFO][7188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.736 [INFO][7188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.780 [INFO][7206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.780 [INFO][7206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.780 [INFO][7206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.795 [WARNING][7206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.795 [INFO][7206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" HandleID="k8s-pod-network.2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-coredns--7c65d6cfc9--ht5gv-eth0" Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.798 [INFO][7206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.804528 env[1672]: 2025-09-13 00:55:57.801 [INFO][7188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f" Sep 13 00:55:57.806024 env[1672]: time="2025-09-13T00:55:57.804541414Z" level=info msg="TearDown network for sandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" successfully" Sep 13 00:55:57.821464 env[1672]: time="2025-09-13T00:55:57.821338277Z" level=info msg="RemovePodSandbox \"2dc7481180b79fd07c6803ab412db12ac43a06e4435a06f3d0ccd98e738a015f\" returns successfully" Sep 13 00:55:57.822306 env[1672]: time="2025-09-13T00:55:57.822241722Z" level=info msg="StopPodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.901 [WARNING][7234] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b25ee3-c882-4d6d-87fd-8435c4ab9603", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4", Pod:"calico-apiserver-77cc844975-7r74t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia674898eb82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.902 [INFO][7234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.902 [INFO][7234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" iface="eth0" netns="" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.902 [INFO][7234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.902 [INFO][7234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.948 [INFO][7252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.949 [INFO][7252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.949 [INFO][7252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.962 [WARNING][7252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.963 [INFO][7252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.966 [INFO][7252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:57.972618 env[1672]: 2025-09-13 00:55:57.969 [INFO][7234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:57.974644 env[1672]: time="2025-09-13T00:55:57.972630833Z" level=info msg="TearDown network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" successfully" Sep 13 00:55:57.974644 env[1672]: time="2025-09-13T00:55:57.972693612Z" level=info msg="StopPodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" returns successfully" Sep 13 00:55:57.974644 env[1672]: time="2025-09-13T00:55:57.973530929Z" level=info msg="RemovePodSandbox for \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" Sep 13 00:55:57.974644 env[1672]: time="2025-09-13T00:55:57.973602811Z" level=info msg="Forcibly stopping sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\"" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.028 [WARNING][7279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0", GenerateName:"calico-apiserver-77cc844975-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b25ee3-c882-4d6d-87fd-8435c4ab9603", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 55, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77cc844975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-d04f0c45dd", ContainerID:"8ea74d073bbd8b1f1d23bec61c1d4af1287a469979a7e14c7209918d4bedc2f4", Pod:"calico-apiserver-77cc844975-7r74t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia674898eb82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.028 [INFO][7279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.028 [INFO][7279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" iface="eth0" netns="" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.028 [INFO][7279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.028 [INFO][7279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.044 [INFO][7295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.044 [INFO][7295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.044 [INFO][7295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.049 [WARNING][7295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.049 [INFO][7295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" HandleID="k8s-pod-network.36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Workload="ci--3510.3.8--n--d04f0c45dd-k8s-calico--apiserver--77cc844975--7r74t-eth0" Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.051 [INFO][7295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:55:58.053345 env[1672]: 2025-09-13 00:55:58.052 [INFO][7279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d" Sep 13 00:55:58.053817 env[1672]: time="2025-09-13T00:55:58.053341194Z" level=info msg="TearDown network for sandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" successfully" Sep 13 00:55:58.055447 env[1672]: time="2025-09-13T00:55:58.055391446Z" level=info msg="RemovePodSandbox \"36f254ac1929ef820a12a8016351656d9247e41bc2a15b73a67c9787c960667d\" returns successfully" Sep 13 00:56:01.338984 kubelet[2677]: I0913 00:56:01.338879 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:56:01.365000 audit[7311]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=7311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:01.392613 kernel: kauditd_printk_skb: 14 callbacks suppressed Sep 13 00:56:01.392745 kernel: audit: type=1325 audit(1757724961.365:403): table=filter:121 family=2 entries=8 op=nft_register_rule pid=7311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:01.365000 audit[7311]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffea2c952f0 a2=0 a3=7ffea2c952dc items=0 ppid=2858 pid=7311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:01.544752 kernel: audit: type=1300 audit(1757724961.365:403): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffea2c952f0 a2=0 a3=7ffea2c952dc items=0 ppid=2858 pid=7311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:01.544790 kernel: audit: type=1327 audit(1757724961.365:403): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:01.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:01.609000 audit[7311]: NETFILTER_CFG table=nat:122 family=2 entries=38 op=nft_register_chain pid=7311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:01.609000 audit[7311]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffea2c952f0 a2=0 a3=7ffea2c952dc items=0 ppid=2858 pid=7311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:01.761821 kernel: audit: type=1325 audit(1757724961.609:404): table=nat:122 family=2 entries=38 op=nft_register_chain pid=7311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 00:56:01.761857 kernel: audit: type=1300 audit(1757724961.609:404): arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffea2c952f0 a2=0 a3=7ffea2c952dc items=0 ppid=2858 pid=7311 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:01.761873 kernel: audit: type=1327 audit(1757724961.609:404): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:56:01.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 00:59:05.502479 systemd[1]: Started sshd@9-147.75.203.133:22-92.118.39.62:39398.service. Sep 13 00:59:05.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.203.133:22-92.118.39.62:39398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:05.591449 kernel: audit: type=1130 audit(1757725145.502:405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.203.133:22-92.118.39.62:39398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:06.210862 sshd[8079]: Invalid user azureuser from 92.118.39.62 port 39398 Sep 13 00:59:06.385890 sshd[8079]: pam_faillock(sshd:auth): User unknown Sep 13 00:59:06.386988 sshd[8079]: pam_unix(sshd:auth): check pass; user unknown Sep 13 00:59:06.387076 sshd[8079]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.118.39.62 Sep 13 00:59:06.387990 sshd[8079]: pam_faillock(sshd:auth): User unknown Sep 13 00:59:06.386000 audit[8079]: USER_AUTH pid=8079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="azureuser" exe="/usr/sbin/sshd" hostname=92.118.39.62 addr=92.118.39.62 terminal=ssh res=failed' Sep 13 00:59:06.476550 kernel: audit: type=1100 audit(1757725146.386:406): pid=8079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="azureuser" exe="/usr/sbin/sshd" hostname=92.118.39.62 addr=92.118.39.62 terminal=ssh res=failed' Sep 13 00:59:07.877588 sshd[8079]: Failed password for invalid user azureuser from 92.118.39.62 port 39398 ssh2 Sep 13 00:59:08.245538 sshd[8079]: Connection closed by invalid user azureuser 92.118.39.62 port 39398 [preauth] Sep 13 00:59:08.247845 systemd[1]: sshd@9-147.75.203.133:22-92.118.39.62:39398.service: Deactivated successfully. Sep 13 00:59:08.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.203.133:22-92.118.39.62:39398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:59:08.340553 kernel: audit: type=1131 audit(1757725148.247:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.203.133:22-92.118.39.62:39398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:01:55.446588 update_engine[1662]: I0913 01:01:55.446524 1662 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 01:01:55.446588 update_engine[1662]: I0913 01:01:55.446566 1662 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 01:01:55.464648 update_engine[1662]: I0913 01:01:55.454840 1662 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 01:01:55.464865 update_engine[1662]: I0913 01:01:55.464776 1662 omaha_request_params.cc:62] Current group set to lts Sep 13 01:01:55.465121 update_engine[1662]: I0913 01:01:55.465072 1662 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 01:01:55.465121 update_engine[1662]: I0913 01:01:55.465092 1662 update_attempter.cc:643] Scheduling an action processor start. Sep 13 01:01:55.465121 update_engine[1662]: I0913 01:01:55.465124 1662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:01:55.465478 update_engine[1662]: I0913 01:01:55.465190 1662 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 01:01:55.465478 update_engine[1662]: I0913 01:01:55.465322 1662 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:01:55.465478 update_engine[1662]: I0913 01:01:55.465337 1662 omaha_request_action.cc:271] Request: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: Sep 13 01:01:55.465478 update_engine[1662]: I0913 01:01:55.465351 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:01:55.466565 locksmithd[1708]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 01:01:55.468465 update_engine[1662]: I0913 01:01:55.468415 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:01:55.468691 update_engine[1662]: E0913 01:01:55.468647 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:01:55.468840 update_engine[1662]: I0913 01:01:55.468811 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 01:02:05.387944 update_engine[1662]: I0913 01:02:05.387847 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:02:05.388887 update_engine[1662]: I0913 01:02:05.388298 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:02:05.388887 update_engine[1662]: E0913 01:02:05.388504 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:02:05.388887 update_engine[1662]: I0913 01:02:05.388664 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 01:02:07.001137 systemd[1]: Started sshd@10-147.75.203.133:22-139.178.89.65:36006.service. Sep 13 01:02:07.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.203.133:22-139.178.89.65:36006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:07.096371 kernel: audit: type=1130 audit(1757725327.001:408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.203.133:22-139.178.89.65:36006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:07.120000 audit[8790]: USER_ACCT pid=8790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.120781 sshd[8790]: Accepted publickey for core from 139.178.89.65 port 36006 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:07.121940 sshd[8790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:07.124367 systemd-logind[1709]: New session 12 of user core. Sep 13 01:02:07.124856 systemd[1]: Started session-12.scope. Sep 13 01:02:07.207993 sshd[8790]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:07.209348 systemd[1]: sshd@10-147.75.203.133:22-139.178.89.65:36006.service: Deactivated successfully. Sep 13 01:02:07.209934 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:02:07.209935 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:02:07.210643 systemd-logind[1709]: Removed session 12. 
Sep 13 01:02:07.121000 audit[8790]: CRED_ACQ pid=8790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.305298 kernel: audit: type=1101 audit(1757725327.120:409): pid=8790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.305338 kernel: audit: type=1103 audit(1757725327.121:410): pid=8790 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.305357 kernel: audit: type=1006 audit(1757725327.121:411): pid=8790 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Sep 13 01:02:07.121000 audit[8790]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffbae4a20 a2=3 a3=0 items=0 ppid=1 pid=8790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:07.457958 kernel: audit: type=1300 audit(1757725327.121:411): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffffbae4a20 a2=3 a3=0 items=0 ppid=1 pid=8790 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:07.458027 kernel: audit: type=1327 audit(1757725327.121:411): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:07.121000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:07.488845 
kernel: audit: type=1105 audit(1757725327.126:412): pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.126000 audit[8790]: USER_START pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.584453 kernel: audit: type=1103 audit(1757725327.127:413): pid=8793 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.127000 audit[8793]: CRED_ACQ pid=8793 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.673975 kernel: audit: type=1106 audit(1757725327.208:414): pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.208000 audit[8790]: USER_END pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.769989 kernel: 
audit: type=1104 audit(1757725327.208:415): pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.208000 audit[8790]: CRED_DISP pid=8790 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:07.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.203.133:22-139.178.89.65:36006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:12.211189 systemd[1]: Started sshd@11-147.75.203.133:22-139.178.89.65:55526.service. Sep 13 01:02:12.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.203.133:22-139.178.89.65:55526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:12.238027 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 01:02:12.238109 kernel: audit: type=1130 audit(1757725332.210:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.203.133:22-139.178.89.65:55526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:12.350000 audit[8865]: USER_ACCT pid=8865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.351388 sshd[8865]: Accepted publickey for core from 139.178.89.65 port 55526 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:12.352661 sshd[8865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:12.354970 systemd-logind[1709]: New session 13 of user core. Sep 13 01:02:12.355440 systemd[1]: Started session-13.scope. Sep 13 01:02:12.437319 sshd[8865]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:12.438838 systemd[1]: sshd@11-147.75.203.133:22-139.178.89.65:55526.service: Deactivated successfully. Sep 13 01:02:12.439400 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit. Sep 13 01:02:12.439431 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:02:12.439896 systemd-logind[1709]: Removed session 13. 
Sep 13 01:02:12.352000 audit[8865]: CRED_ACQ pid=8865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.534932 kernel: audit: type=1101 audit(1757725332.350:418): pid=8865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.534973 kernel: audit: type=1103 audit(1757725332.352:419): pid=8865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.534995 kernel: audit: type=1006 audit(1757725332.352:420): pid=8865 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Sep 13 01:02:12.593965 kernel: audit: type=1300 audit(1757725332.352:420): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff24f2e560 a2=3 a3=0 items=0 ppid=1 pid=8865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:12.352000 audit[8865]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff24f2e560 a2=3 a3=0 items=0 ppid=1 pid=8865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:12.686721 kernel: audit: type=1327 audit(1757725332.352:420): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:12.352000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:12.717444 
kernel: audit: type=1105 audit(1757725332.357:421): pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.357000 audit[8865]: USER_START pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.812489 kernel: audit: type=1103 audit(1757725332.357:422): pid=8868 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.357000 audit[8868]: CRED_ACQ pid=8868 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.901496 kernel: audit: type=1106 audit(1757725332.437:423): pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.437000 audit[8865]: USER_END pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.437000 
audit[8865]: CRED_DISP pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:13.085816 kernel: audit: type=1104 audit(1757725332.437:424): pid=8865 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:12.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.203.133:22-139.178.89.65:55526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:15.381251 update_engine[1662]: I0913 01:02:15.381145 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:02:15.382102 update_engine[1662]: I0913 01:02:15.381619 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:02:15.382102 update_engine[1662]: E0913 01:02:15.381808 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:02:15.382102 update_engine[1662]: I0913 01:02:15.381964 1662 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 01:02:17.441114 systemd[1]: Started sshd@12-147.75.203.133:22-139.178.89.65:55536.service. Sep 13 01:02:17.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-147.75.203.133:22-139.178.89.65:55536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:17.468354 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 01:02:17.468443 kernel: audit: type=1130 audit(1757725337.440:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-147.75.203.133:22-139.178.89.65:55536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:17.581000 audit[8894]: USER_ACCT pid=8894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.581916 sshd[8894]: Accepted publickey for core from 139.178.89.65 port 55536 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:17.583052 sshd[8894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:17.585308 systemd-logind[1709]: New session 14 of user core. Sep 13 01:02:17.585784 systemd[1]: Started session-14.scope. 
Sep 13 01:02:17.582000 audit[8894]: CRED_ACQ pid=8894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.673444 kernel: audit: type=1101 audit(1757725337.581:427): pid=8894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.673474 kernel: audit: type=1103 audit(1757725337.582:428): pid=8894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.748453 sshd[8894]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:17.749981 systemd[1]: Started sshd@13-147.75.203.133:22-139.178.89.65:55548.service. Sep 13 01:02:17.750298 systemd[1]: sshd@12-147.75.203.133:22-139.178.89.65:55536.service: Deactivated successfully. Sep 13 01:02:17.750808 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:02:17.750852 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:02:17.751315 systemd-logind[1709]: Removed session 14. 
Sep 13 01:02:17.821661 kernel: audit: type=1006 audit(1757725337.582:429): pid=8894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Sep 13 01:02:17.821702 kernel: audit: type=1300 audit(1757725337.582:429): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0305c30 a2=3 a3=0 items=0 ppid=1 pid=8894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:17.582000 audit[8894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0305c30 a2=3 a3=0 items=0 ppid=1 pid=8894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:17.845495 sshd[8920]: Accepted publickey for core from 139.178.89.65 port 55548 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:17.846970 sshd[8920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:17.849257 systemd-logind[1709]: New session 15 of user core. Sep 13 01:02:17.849742 systemd[1]: Started session-15.scope. Sep 13 01:02:17.582000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:17.941772 sshd[8920]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:17.943383 systemd[1]: Started sshd@14-147.75.203.133:22-139.178.89.65:55552.service. Sep 13 01:02:17.943826 systemd[1]: sshd@13-147.75.203.133:22-139.178.89.65:55548.service: Deactivated successfully. 
Sep 13 01:02:17.943922 kernel: audit: type=1327 audit(1757725337.582:429): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:17.943959 kernel: audit: type=1105 audit(1757725337.587:430): pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.587000 audit[8894]: USER_START pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.944592 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:02:17.944613 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:02:17.945045 systemd-logind[1709]: Removed session 15. 
Sep 13 01:02:18.038092 kernel: audit: type=1103 audit(1757725337.588:431): pid=8897 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.588000 audit[8897]: CRED_ACQ pid=8897 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.127015 kernel: audit: type=1106 audit(1757725337.748:432): pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.748000 audit[8894]: USER_END pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.150804 sshd[8944]: Accepted publickey for core from 139.178.89.65 port 55552 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:18.151556 sshd[8944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:18.153760 systemd-logind[1709]: New session 16 of user core. Sep 13 01:02:18.154223 systemd[1]: Started session-16.scope. 
Sep 13 01:02:17.748000 audit[8894]: CRED_DISP pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.232397 sshd[8944]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:18.233920 systemd[1]: sshd@14-147.75.203.133:22-139.178.89.65:55552.service: Deactivated successfully. Sep 13 01:02:18.234565 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:02:18.234598 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:02:18.235131 systemd-logind[1709]: Removed session 16. Sep 13 01:02:18.311447 kernel: audit: type=1104 audit(1757725337.748:433): pid=8894 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-147.75.203.133:22-139.178.89.65:55548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:17.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-147.75.203.133:22-139.178.89.65:55536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:17.845000 audit[8920]: USER_ACCT pid=8920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.846000 audit[8920]: CRED_ACQ pid=8920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.846000 audit[8920]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7beaa20 a2=3 a3=0 items=0 ppid=1 pid=8920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:17.846000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:17.851000 audit[8920]: USER_START pid=8920 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.852000 audit[8924]: CRED_ACQ pid=8924 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.942000 audit[8920]: USER_END pid=8920 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.942000 audit[8920]: CRED_DISP pid=8920 
uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:17.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-147.75.203.133:22-139.178.89.65:55552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:17.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-147.75.203.133:22-139.178.89.65:55548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:18.150000 audit[8944]: USER_ACCT pid=8944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.150000 audit[8944]: CRED_ACQ pid=8944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.151000 audit[8944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd6748630 a2=3 a3=0 items=0 ppid=1 pid=8944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:18.151000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:18.155000 audit[8944]: USER_START pid=8944 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.156000 audit[8948]: CRED_ACQ pid=8948 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.232000 audit[8944]: USER_END pid=8944 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.232000 audit[8944]: CRED_DISP pid=8944 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:18.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-147.75.203.133:22-139.178.89.65:55552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:23.240028 systemd[1]: Started sshd@15-147.75.203.133:22-139.178.89.65:50148.service. Sep 13 01:02:23.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-147.75.203.133:22-139.178.89.65:50148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:23.282262 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 01:02:23.282351 kernel: audit: type=1130 audit(1757725343.239:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-147.75.203.133:22-139.178.89.65:50148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:23.394000 audit[9008]: USER_ACCT pid=9008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.396496 sshd[9008]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:23.399863 sshd[9008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:23.410031 systemd-logind[1709]: New session 17 of user core. Sep 13 01:02:23.412000 systemd[1]: Started session-17.scope. Sep 13 01:02:23.397000 audit[9008]: CRED_ACQ pid=9008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.497720 sshd[9008]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:23.499126 systemd[1]: sshd@15-147.75.203.133:22-139.178.89.65:50148.service: Deactivated successfully. Sep 13 01:02:23.499751 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:02:23.499752 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:02:23.500193 systemd-logind[1709]: Removed session 17. 
Sep 13 01:02:23.577756 kernel: audit: type=1101 audit(1757725343.394:454): pid=9008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.577822 kernel: audit: type=1103 audit(1757725343.397:455): pid=9008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.577842 kernel: audit: type=1006 audit(1757725343.397:456): pid=9008 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Sep 13 01:02:23.636273 kernel: audit: type=1300 audit(1757725343.397:456): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8547a5f0 a2=3 a3=0 items=0 ppid=1 pid=9008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:23.397000 audit[9008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8547a5f0 a2=3 a3=0 items=0 ppid=1 pid=9008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:23.728243 kernel: audit: type=1327 audit(1757725343.397:456): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:23.397000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:23.758670 kernel: audit: type=1105 audit(1757725343.418:457): pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.418000 audit[9008]: USER_START pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.852920 kernel: audit: type=1103 audit(1757725343.419:458): pid=9011 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.419000 audit[9011]: CRED_ACQ pid=9011 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.941949 kernel: audit: type=1106 audit(1757725343.497:459): pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.497000 audit[9008]: USER_END pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.497000 audit[9008]: CRED_DISP pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:24.126445 kernel: 
audit: type=1104 audit(1757725343.497:460): pid=9008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:23.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-147.75.203.133:22-139.178.89.65:50148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:25.379473 update_engine[1662]: I0913 01:02:25.379350 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:02:25.380290 update_engine[1662]: I0913 01:02:25.379818 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:02:25.380290 update_engine[1662]: E0913 01:02:25.380016 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:02:25.380290 update_engine[1662]: I0913 01:02:25.380155 1662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:02:25.380290 update_engine[1662]: I0913 01:02:25.380170 1662 omaha_request_action.cc:621] Omaha request response: Sep 13 01:02:25.380709 update_engine[1662]: E0913 01:02:25.380304 1662 omaha_request_action.cc:640] Omaha request network transfer failed. Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380332 1662 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380342 1662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380350 1662 update_attempter.cc:306] Processing Done. Sep 13 01:02:25.380709 update_engine[1662]: E0913 01:02:25.380386 1662 update_attempter.cc:619] Update failed. 
Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380396 1662 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380404 1662 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380414 1662 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380555 1662 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380602 1662 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380611 1662 omaha_request_action.cc:271] Request: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: Sep 13 01:02:25.380709 update_engine[1662]: I0913 01:02:25.380620 1662 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.380925 1662 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:02:25.382235 update_engine[1662]: E0913 01:02:25.381081 1662 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381204 1662 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381217 1662 omaha_request_action.cc:621] Omaha request response: Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381227 1662 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type 
OmahaRequestAction Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381235 1662 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381242 1662 update_attempter.cc:306] Processing Done. Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381250 1662 update_attempter.cc:310] Error event sent. Sep 13 01:02:25.382235 update_engine[1662]: I0913 01:02:25.381269 1662 update_check_scheduler.cc:74] Next update check in 40m10s Sep 13 01:02:25.383026 locksmithd[1708]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 01:02:25.383026 locksmithd[1708]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 01:02:28.503981 systemd[1]: Started sshd@16-147.75.203.133:22-139.178.89.65:50158.service. Sep 13 01:02:28.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-147.75.203.133:22-139.178.89.65:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:28.531025 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 01:02:28.531099 kernel: audit: type=1130 audit(1757725348.502:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-147.75.203.133:22-139.178.89.65:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:28.644000 audit[9052]: USER_ACCT pid=9052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.645863 sshd[9052]: Accepted publickey for core from 139.178.89.65 port 50158 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:28.649483 sshd[9052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:28.654030 systemd-logind[1709]: New session 18 of user core. Sep 13 01:02:28.654510 systemd[1]: Started session-18.scope. Sep 13 01:02:28.733137 sshd[9052]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:28.734519 systemd[1]: sshd@16-147.75.203.133:22-139.178.89.65:50158.service: Deactivated successfully. Sep 13 01:02:28.735111 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit. Sep 13 01:02:28.735144 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 01:02:28.735598 systemd-logind[1709]: Removed session 18. 
Sep 13 01:02:28.647000 audit[9052]: CRED_ACQ pid=9052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.827296 kernel: audit: type=1101 audit(1757725348.644:463): pid=9052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.827335 kernel: audit: type=1103 audit(1757725348.647:464): pid=9052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.827352 kernel: audit: type=1006 audit(1757725348.647:465): pid=9052 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Sep 13 01:02:28.885814 kernel: audit: type=1300 audit(1757725348.647:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff06543f10 a2=3 a3=0 items=0 ppid=1 pid=9052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:28.647000 audit[9052]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff06543f10 a2=3 a3=0 items=0 ppid=1 pid=9052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:28.977786 kernel: audit: type=1327 audit(1757725348.647:465): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:28.647000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:29.008209 
kernel: audit: type=1105 audit(1757725348.655:466): pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.655000 audit[9052]: USER_START pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:29.102444 kernel: audit: type=1103 audit(1757725348.655:467): pid=9055 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.655000 audit[9055]: CRED_ACQ pid=9055 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:29.191429 kernel: audit: type=1106 audit(1757725348.732:468): pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.732000 audit[9052]: USER_END pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:29.286719 kernel: 
audit: type=1104 audit(1757725348.732:469): pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.732000 audit[9052]: CRED_DISP pid=9052 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:28.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-147.75.203.133:22-139.178.89.65:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:33.739858 systemd[1]: Started sshd@17-147.75.203.133:22-139.178.89.65:48226.service. Sep 13 01:02:33.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-147.75.203.133:22-139.178.89.65:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:33.767001 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 01:02:33.767080 kernel: audit: type=1130 audit(1757725353.738:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-147.75.203.133:22-139.178.89.65:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:33.879000 audit[9079]: USER_ACCT pid=9079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.881035 sshd[9079]: Accepted publickey for core from 139.178.89.65 port 48226 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:33.881677 sshd[9079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:33.884076 systemd-logind[1709]: New session 19 of user core. Sep 13 01:02:33.884540 systemd[1]: Started session-19.scope. Sep 13 01:02:33.962759 sshd[9079]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:33.964156 systemd[1]: sshd@17-147.75.203.133:22-139.178.89.65:48226.service: Deactivated successfully. Sep 13 01:02:33.964787 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 01:02:33.964819 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit. Sep 13 01:02:33.965253 systemd-logind[1709]: Removed session 19. 
Sep 13 01:02:33.880000 audit[9079]: CRED_ACQ pid=9079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.062445 kernel: audit: type=1101 audit(1757725353.879:472): pid=9079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.062484 kernel: audit: type=1103 audit(1757725353.880:473): pid=9079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.062501 kernel: audit: type=1006 audit(1757725353.880:474): pid=9079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Sep 13 01:02:34.120883 kernel: audit: type=1300 audit(1757725353.880:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef452fbf0 a2=3 a3=0 items=0 ppid=1 pid=9079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:33.880000 audit[9079]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef452fbf0 a2=3 a3=0 items=0 ppid=1 pid=9079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:34.212764 kernel: audit: type=1327 audit(1757725353.880:474): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:33.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:34.243136 
kernel: audit: type=1105 audit(1757725353.885:475): pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.885000 audit[9079]: USER_START pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.337313 kernel: audit: type=1103 audit(1757725353.885:476): pid=9082 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.885000 audit[9082]: CRED_ACQ pid=9082 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.426241 kernel: audit: type=1106 audit(1757725353.962:477): pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.962000 audit[9079]: USER_END pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.962000 
audit[9079]: CRED_DISP pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:34.610519 kernel: audit: type=1104 audit(1757725353.962:478): pid=9079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:33.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-147.75.203.133:22-139.178.89.65:48226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:38.965724 systemd[1]: Started sshd@18-147.75.203.133:22-139.178.89.65:48238.service. Sep 13 01:02:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-147.75.203.133:22-139.178.89.65:48238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:38.992141 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 13 01:02:38.992263 kernel: audit: type=1130 audit(1757725358.964:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-147.75.203.133:22-139.178.89.65:48238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:39.103000 audit[9136]: USER_ACCT pid=9136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.105092 sshd[9136]: Accepted publickey for core from 139.178.89.65 port 48238 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:39.106564 sshd[9136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:39.109131 systemd-logind[1709]: New session 20 of user core. Sep 13 01:02:39.109651 systemd[1]: Started session-20.scope. Sep 13 01:02:39.189391 sshd[9136]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:39.190873 systemd[1]: sshd@18-147.75.203.133:22-139.178.89.65:48238.service: Deactivated successfully. Sep 13 01:02:39.191569 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 01:02:39.191604 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit. Sep 13 01:02:39.192059 systemd-logind[1709]: Removed session 20. 
Sep 13 01:02:39.104000 audit[9136]: CRED_ACQ pid=9136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.286972 kernel: audit: type=1101 audit(1757725359.103:481): pid=9136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.287060 kernel: audit: type=1103 audit(1757725359.104:482): pid=9136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.287082 kernel: audit: type=1006 audit(1757725359.105:483): pid=9136 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Sep 13 01:02:39.288665 systemd[1]: Started sshd@19-147.75.203.133:22-139.178.89.65:48246.service. Sep 13 01:02:39.105000 audit[9136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd19a8e080 a2=3 a3=0 items=0 ppid=1 pid=9136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:39.369249 sshd[9161]: Accepted publickey for core from 139.178.89.65 port 48246 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:39.371301 sshd[9161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:39.373836 systemd-logind[1709]: New session 21 of user core. Sep 13 01:02:39.374223 systemd[1]: Started session-21.scope. 
Sep 13 01:02:39.437465 kernel: audit: type=1300 audit(1757725359.105:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd19a8e080 a2=3 a3=0 items=0 ppid=1 pid=9136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:39.437533 kernel: audit: type=1327 audit(1757725359.105:483): proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:39.105000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:39.467934 kernel: audit: type=1105 audit(1757725359.110:484): pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.110000 audit[9136]: USER_START pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.111000 audit[9139]: CRED_ACQ pid=9139 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.609096 systemd[1]: Started sshd@20-147.75.203.133:22-139.178.89.65:48250.service. 
Sep 13 01:02:39.651123 kernel: audit: type=1103 audit(1757725359.111:485): pid=9139 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.651166 kernel: audit: type=1106 audit(1757725359.188:486): pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.188000 audit[9136]: USER_END pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.651351 sshd[9161]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:39.652721 systemd[1]: sshd@19-147.75.203.133:22-139.178.89.65:48246.service: Deactivated successfully. Sep 13 01:02:39.653318 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit. Sep 13 01:02:39.653355 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 01:02:39.653886 systemd-logind[1709]: Removed session 21. 
Sep 13 01:02:39.746382 kernel: audit: type=1104 audit(1757725359.188:487): pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.188000 audit[9136]: CRED_DISP pid=9136 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.770284 sshd[9181]: Accepted publickey for core from 139.178.89.65 port 48250 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:39.771650 sshd[9181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:39.773918 systemd-logind[1709]: New session 22 of user core. Sep 13 01:02:39.774441 systemd[1]: Started session-22.scope. Sep 13 01:02:39.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-147.75.203.133:22-139.178.89.65:48238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:39.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-147.75.203.133:22-139.178.89.65:48246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:39.367000 audit[9161]: USER_ACCT pid=9161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.369000 audit[9161]: CRED_ACQ pid=9161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.369000 audit[9161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe61e7fb50 a2=3 a3=0 items=0 ppid=1 pid=9161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:39.369000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:39.374000 audit[9161]: USER_START pid=9161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.375000 audit[9164]: CRED_ACQ pid=9164 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-147.75.203.133:22-139.178.89.65:48250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:02:39.650000 audit[9161]: USER_END pid=9161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.650000 audit[9161]: CRED_DISP pid=9161 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-147.75.203.133:22-139.178.89.65:48246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:39.768000 audit[9181]: USER_ACCT pid=9181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.770000 audit[9181]: CRED_ACQ pid=9181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.770000 audit[9181]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda8d31bd0 a2=3 a3=0 items=0 ppid=1 pid=9181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:39.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:39.775000 audit[9181]: USER_START pid=9181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:39.776000 audit[9186]: CRED_ACQ pid=9186 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.865000 audit[9216]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=9216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:02:40.865000 audit[9216]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff35533130 a2=0 a3=7fff3553311c items=0 ppid=2858 pid=9216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:40.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:02:40.875017 sshd[9181]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:40.875000 audit[9181]: USER_END pid=9181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.875000 audit[9181]: CRED_DISP pid=9181 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.877884 systemd[1]: Started sshd@21-147.75.203.133:22-139.178.89.65:51200.service. 
Sep 13 01:02:40.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-147.75.203.133:22-139.178.89.65:51200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:40.878582 systemd[1]: sshd@20-147.75.203.133:22-139.178.89.65:48250.service: Deactivated successfully. Sep 13 01:02:40.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-147.75.203.133:22-139.178.89.65:48250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:40.878000 audit[9216]: NETFILTER_CFG table=nat:124 family=2 entries=26 op=nft_register_rule pid=9216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:02:40.878000 audit[9216]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff35533130 a2=0 a3=0 items=0 ppid=2858 pid=9216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:40.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:02:40.879671 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit. Sep 13 01:02:40.879745 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 01:02:40.880494 systemd-logind[1709]: Removed session 22. 
Sep 13 01:02:40.897000 audit[9223]: NETFILTER_CFG table=filter:125 family=2 entries=32 op=nft_register_rule pid=9223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:02:40.897000 audit[9223]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffdb5ebb420 a2=0 a3=7ffdb5ebb40c items=0 ppid=2858 pid=9223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:40.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:02:40.906000 audit[9223]: NETFILTER_CFG table=nat:126 family=2 entries=26 op=nft_register_rule pid=9223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Sep 13 01:02:40.906000 audit[9223]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdb5ebb420 a2=0 a3=0 items=0 ppid=2858 pid=9223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:40.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Sep 13 01:02:40.915000 audit[9217]: USER_ACCT pid=9217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.915587 sshd[9217]: Accepted publickey for core from 139.178.89.65 port 51200 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 01:02:40.916000 audit[9217]: CRED_ACQ pid=9217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.916000 audit[9217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff33145360 a2=3 a3=0 items=0 ppid=1 pid=9217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:02:40.916000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Sep 13 01:02:40.917115 sshd[9217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:02:40.921195 systemd-logind[1709]: New session 23 of user core. Sep 13 01:02:40.922330 systemd[1]: Started session-23.scope. Sep 13 01:02:40.927000 audit[9217]: USER_START pid=9217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:40.928000 audit[9226]: CRED_ACQ pid=9226 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:41.111679 sshd[9217]: pam_unix(sshd:session): session closed for user core Sep 13 01:02:41.111000 audit[9217]: USER_END pid=9217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:41.111000 audit[9217]: CRED_DISP pid=9217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Sep 13 01:02:41.113221 systemd[1]: Started sshd@22-147.75.203.133:22-139.178.89.65:51202.service. Sep 13 01:02:41.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-147.75.203.133:22-139.178.89.65:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:41.113577 systemd[1]: sshd@21-147.75.203.133:22-139.178.89.65:51200.service: Deactivated successfully. Sep 13 01:02:41.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-147.75.203.133:22-139.178.89.65:51200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:02:41.114150 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:02:41.114179 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:02:41.114694 systemd-logind[1709]: Removed session 23. 
Sep 13 01:02:41.142000 audit[9246]: USER_ACCT pid=9246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.142902 sshd[9246]: Accepted publickey for core from 139.178.89.65 port 51202 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 01:02:41.143000 audit[9246]: CRED_ACQ pid=9246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.143000 audit[9246]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5173bea0 a2=3 a3=0 items=0 ppid=1 pid=9246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:41.143000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:41.143674 sshd[9246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:02:41.146285 systemd-logind[1709]: New session 24 of user core.
Sep 13 01:02:41.146870 systemd[1]: Started session-24.scope.
Sep 13 01:02:41.149000 audit[9246]: USER_START pid=9246 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.150000 audit[9250]: CRED_ACQ pid=9250 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.273026 sshd[9246]: pam_unix(sshd:session): session closed for user core
Sep 13 01:02:41.273000 audit[9246]: USER_END pid=9246 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.273000 audit[9246]: CRED_DISP pid=9246 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:41.274343 systemd[1]: sshd@22-147.75.203.133:22-139.178.89.65:51202.service: Deactivated successfully.
Sep 13 01:02:41.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-147.75.203.133:22-139.178.89.65:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:41.274933 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Sep 13 01:02:41.274978 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 01:02:41.275420 systemd-logind[1709]: Removed session 24.
Sep 13 01:02:45.627000 audit[9310]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=9310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:02:45.654786 kernel: kauditd_printk_skb: 57 callbacks suppressed
Sep 13 01:02:45.654825 kernel: audit: type=1325 audit(1757725365.627:529): table=filter:127 family=2 entries=20 op=nft_register_rule pid=9310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:02:45.627000 audit[9310]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdbe3cb630 a2=0 a3=7ffdbe3cb61c items=0 ppid=2858 pid=9310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:45.810775 kernel: audit: type=1300 audit(1757725365.627:529): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffdbe3cb630 a2=0 a3=7ffdbe3cb61c items=0 ppid=2858 pid=9310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:45.810809 kernel: audit: type=1327 audit(1757725365.627:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:02:45.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:02:45.882000 audit[9310]: NETFILTER_CFG table=nat:128 family=2 entries=110 op=nft_register_chain pid=9310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:02:45.882000 audit[9310]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffdbe3cb630 a2=0 a3=7ffdbe3cb61c items=0 ppid=2858 pid=9310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:46.039827 kernel: audit: type=1325 audit(1757725365.882:530): table=nat:128 family=2 entries=110 op=nft_register_chain pid=9310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Sep 13 01:02:46.039861 kernel: audit: type=1300 audit(1757725365.882:530): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffdbe3cb630 a2=0 a3=7ffdbe3cb61c items=0 ppid=2858 pid=9310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:46.039876 kernel: audit: type=1327 audit(1757725365.882:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:02:45.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Sep 13 01:02:46.278978 systemd[1]: Started sshd@23-147.75.203.133:22-139.178.89.65:51206.service.
Sep 13 01:02:46.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-147.75.203.133:22-139.178.89.65:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:46.369440 kernel: audit: type=1130 audit(1757725366.278:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-147.75.203.133:22-139.178.89.65:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:46.392000 audit[9312]: USER_ACCT pid=9312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.393082 sshd[9312]: Accepted publickey for core from 139.178.89.65 port 51206 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 01:02:46.394321 sshd[9312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:02:46.396725 systemd-logind[1709]: New session 25 of user core.
Sep 13 01:02:46.397148 systemd[1]: Started session-25.scope.
Sep 13 01:02:46.393000 audit[9312]: CRED_ACQ pid=9312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.576264 kernel: audit: type=1101 audit(1757725366.392:532): pid=9312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.576312 kernel: audit: type=1103 audit(1757725366.393:533): pid=9312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.576335 kernel: audit: type=1006 audit(1757725366.393:534): pid=9312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Sep 13 01:02:46.393000 audit[9312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe466800c0 a2=3 a3=0 items=0 ppid=1 pid=9312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:46.393000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:46.398000 audit[9312]: USER_START pid=9312 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.399000 audit[9315]: CRED_ACQ pid=9315 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.668202 sshd[9312]: pam_unix(sshd:session): session closed for user core
Sep 13 01:02:46.668000 audit[9312]: USER_END pid=9312 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.668000 audit[9312]: CRED_DISP pid=9312 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:46.669558 systemd[1]: sshd@23-147.75.203.133:22-139.178.89.65:51206.service: Deactivated successfully.
Sep 13 01:02:46.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-147.75.203.133:22-139.178.89.65:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:46.670161 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Sep 13 01:02:46.670194 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 01:02:46.670660 systemd-logind[1709]: Removed session 25.
Sep 13 01:02:51.675508 systemd[1]: Started sshd@24-147.75.203.133:22-139.178.89.65:40598.service.
Sep 13 01:02:51.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-147.75.203.133:22-139.178.89.65:40598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:51.703302 kernel: kauditd_printk_skb: 7 callbacks suppressed
Sep 13 01:02:51.703331 kernel: audit: type=1130 audit(1757725371.674:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-147.75.203.133:22-139.178.89.65:40598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:51.817000 audit[9371]: USER_ACCT pid=9371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.817826 sshd[9371]: Accepted publickey for core from 139.178.89.65 port 40598 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 01:02:51.818674 sshd[9371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:02:51.821025 systemd-logind[1709]: New session 26 of user core.
Sep 13 01:02:51.821492 systemd[1]: Started session-26.scope.
Sep 13 01:02:51.898741 sshd[9371]: pam_unix(sshd:session): session closed for user core
Sep 13 01:02:51.900064 systemd[1]: sshd@24-147.75.203.133:22-139.178.89.65:40598.service: Deactivated successfully.
Sep 13 01:02:51.900688 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:02:51.900689 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:02:51.901049 systemd-logind[1709]: Removed session 26.
Sep 13 01:02:51.817000 audit[9371]: CRED_ACQ pid=9371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.000050 kernel: audit: type=1101 audit(1757725371.817:541): pid=9371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.000089 kernel: audit: type=1103 audit(1757725371.817:542): pid=9371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.000104 kernel: audit: type=1006 audit(1757725371.817:543): pid=9371 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Sep 13 01:02:52.059022 kernel: audit: type=1300 audit(1757725371.817:543): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb6374530 a2=3 a3=0 items=0 ppid=1 pid=9371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:51.817000 audit[9371]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb6374530 a2=3 a3=0 items=0 ppid=1 pid=9371 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:52.151514 kernel: audit: type=1327 audit(1757725371.817:543): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:51.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:52.182122 kernel: audit: type=1105 audit(1757725371.822:544): pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.822000 audit[9371]: USER_START pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.823000 audit[9374]: CRED_ACQ pid=9374 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.365793 kernel: audit: type=1103 audit(1757725371.823:545): pid=9374 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.365834 kernel: audit: type=1106 audit(1757725371.897:546): pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.897000 audit[9371]: USER_END pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:52.461719 kernel: audit: type=1104 audit(1757725371.897:547): pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.897000 audit[9371]: CRED_DISP pid=9371 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:51.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-147.75.203.133:22-139.178.89.65:40598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:56.905229 systemd[1]: Started sshd@25-147.75.203.133:22-139.178.89.65:40600.service.
Sep 13 01:02:56.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-147.75.203.133:22-139.178.89.65:40600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:56.932654 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:02:56.932741 kernel: audit: type=1130 audit(1757725376.903:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-147.75.203.133:22-139.178.89.65:40600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:02:57.046000 audit[9415]: USER_ACCT pid=9415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.047353 sshd[9415]: Accepted publickey for core from 139.178.89.65 port 40600 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 01:02:57.048667 sshd[9415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:02:57.051029 systemd-logind[1709]: New session 27 of user core.
Sep 13 01:02:57.051476 systemd[1]: Started session-27.scope.
Sep 13 01:02:57.128264 sshd[9415]: pam_unix(sshd:session): session closed for user core
Sep 13 01:02:57.129768 systemd[1]: sshd@25-147.75.203.133:22-139.178.89.65:40600.service: Deactivated successfully.
Sep 13 01:02:57.130376 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Sep 13 01:02:57.130390 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 01:02:57.131013 systemd-logind[1709]: Removed session 27.
Sep 13 01:02:57.047000 audit[9415]: CRED_ACQ pid=9415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.229314 kernel: audit: type=1101 audit(1757725377.046:550): pid=9415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.229358 kernel: audit: type=1103 audit(1757725377.047:551): pid=9415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.229386 kernel: audit: type=1006 audit(1757725377.047:552): pid=9415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Sep 13 01:02:57.287793 kernel: audit: type=1300 audit(1757725377.047:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde06b8900 a2=3 a3=0 items=0 ppid=1 pid=9415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:57.047000 audit[9415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde06b8900 a2=3 a3=0 items=0 ppid=1 pid=9415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:02:57.379707 kernel: audit: type=1327 audit(1757725377.047:552): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:57.047000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:02:57.410103 kernel: audit: type=1105 audit(1757725377.051:553): pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.051000 audit[9415]: USER_START pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.052000 audit[9420]: CRED_ACQ pid=9420 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.127000 audit[9415]: USER_END pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.688846 kernel: audit: type=1103 audit(1757725377.052:554): pid=9420 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.688880 kernel: audit: type=1106 audit(1757725377.127:555): pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.127000 audit[9415]: CRED_DISP pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.778024 kernel: audit: type=1104 audit(1757725377.127:556): pid=9415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:02:57.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-147.75.203.133:22-139.178.89.65:40600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:02.134207 systemd[1]: Started sshd@26-147.75.203.133:22-139.178.89.65:56478.service.
Sep 13 01:03:02.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-147.75.203.133:22-139.178.89.65:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:02.161273 kernel: kauditd_printk_skb: 1 callbacks suppressed
Sep 13 01:03:02.161307 kernel: audit: type=1130 audit(1757725382.132:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-147.75.203.133:22-139.178.89.65:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:03:02.273000 audit[9443]: USER_ACCT pid=9443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.275482 sshd[9443]: Accepted publickey for core from 139.178.89.65 port 56478 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 01:03:02.276698 sshd[9443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:03:02.279248 systemd-logind[1709]: New session 28 of user core.
Sep 13 01:03:02.279730 systemd[1]: Started session-28.scope.
Sep 13 01:03:02.356944 sshd[9443]: pam_unix(sshd:session): session closed for user core
Sep 13 01:03:02.358403 systemd[1]: sshd@26-147.75.203.133:22-139.178.89.65:56478.service: Deactivated successfully.
Sep 13 01:03:02.359022 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 01:03:02.359049 systemd-logind[1709]: Session 28 logged out. Waiting for processes to exit.
Sep 13 01:03:02.359453 systemd-logind[1709]: Removed session 28.
Sep 13 01:03:02.275000 audit[9443]: CRED_ACQ pid=9443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.456843 kernel: audit: type=1101 audit(1757725382.273:559): pid=9443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.456885 kernel: audit: type=1103 audit(1757725382.275:560): pid=9443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.456906 kernel: audit: type=1006 audit(1757725382.275:561): pid=9443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Sep 13 01:03:02.607463 kernel: audit: type=1300 audit(1757725382.275:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb3a45200 a2=3 a3=0 items=0 ppid=1 pid=9443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:02.607517 kernel: audit: type=1327 audit(1757725382.275:561): proctitle=737368643A20636F7265205B707269765D
Sep 13 01:03:02.275000 audit[9443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb3a45200 a2=3 a3=0 items=0 ppid=1 pid=9443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:03:02.275000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Sep 13 01:03:02.280000 audit[9443]: USER_START pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.732103 kernel: audit: type=1105 audit(1757725382.280:562): pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.732143 kernel: audit: type=1103 audit(1757725382.280:563): pid=9446 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.280000 audit[9446]: CRED_ACQ pid=9446 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.356000 audit[9443]: USER_END pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.916544 kernel: audit: type=1106 audit(1757725382.356:564): pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.916584 kernel: audit: type=1104 audit(1757725382.356:565): pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.356000 audit[9443]: CRED_DISP pid=9443 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Sep 13 01:03:02.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-147.75.203.133:22-139.178.89.65:56478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
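A note on reading the audit records above: the `proctitle=` field in the `PROCTITLE` (type=1327) records is the process command line encoded as hex, with a NUL byte separating arguments. As a quick aid (a minimal sketch, not part of the log or of any Flatcar tooling), the values can be decoded like this:

```python
# Decode auditd PROCTITLE hex strings; arguments are NUL-separated in the raw bytes.
import binascii


def decode_proctitle(hexstr: str) -> str:
    raw = binascii.unhexlify(hexstr)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")


# Values taken verbatim from the PROCTITLE records in this log:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# -> sshd: core [priv]

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```

The numbered `kernel: audit: type=NNNN` lines are kauditd's copies of the same records after rate-limit suppression; the pairings visible in this log (1300 with SYSCALL, 1327 with PROCTITLE, 1325 with NETFILTER_CFG, 1130/1131 with SERVICE_START/SERVICE_STOP) let the two streams be correlated by their shared `audit(timestamp:serial)` tag.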