Feb 9 13:15:28.546004 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 9 13:15:28.546017 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 13:15:28.546024 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 13:15:28.546029 kernel: BIOS-provided physical RAM map:
Feb 9 13:15:28.546032 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 13:15:28.546036 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 13:15:28.546041 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 13:15:28.546045 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 13:15:28.546048 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 13:15:28.546053 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f6efff] usable
Feb 9 13:15:28.546057 kernel: BIOS-e820: [mem 0x0000000061f6f000-0x0000000061f6ffff] ACPI NVS
Feb 9 13:15:28.546061 kernel: BIOS-e820: [mem 0x0000000061f70000-0x0000000061f70fff] reserved
Feb 9 13:15:28.546064 kernel: BIOS-e820: [mem 0x0000000061f71000-0x000000006c0c4fff] usable
Feb 9 13:15:28.546068 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Feb 9 13:15:28.546073 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Feb 9 13:15:28.546079 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Feb 9 13:15:28.546083 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Feb 9 13:15:28.546087 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Feb 9 13:15:28.546091 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Feb 9 13:15:28.546095 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 13:15:28.546099 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 13:15:28.546103 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 13:15:28.546107 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 13:15:28.546112 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 13:15:28.546116 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Feb 9 13:15:28.546121 kernel: NX (Execute Disable) protection: active
Feb 9 13:15:28.546125 kernel: SMBIOS 3.2.1 present.
Feb 9 13:15:28.546129 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 9 13:15:28.546133 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 13:15:28.546137 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 13:15:28.546142 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 13:15:28.546146 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 13:15:28.546151 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Feb 9 13:15:28.546155 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 13:15:28.546160 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Feb 9 13:15:28.546164 kernel: Using GB pages for direct mapping
Feb 9 13:15:28.546169 kernel: ACPI: Early table checksum verification disabled
Feb 9 13:15:28.546173 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 13:15:28.546178 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 13:15:28.546182 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Feb 9 13:15:28.546188 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 13:15:28.546193 kernel: ACPI: FACS 0x000000006D762F80 000040
Feb 9 13:15:28.546199 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Feb 9 13:15:28.546204 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Feb 9 13:15:28.546208 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 13:15:28.546213 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 13:15:28.546218 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 13:15:28.546222 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 13:15:28.546227 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 13:15:28.546232 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 13:15:28.546237 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:15:28.546242 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 13:15:28.546246 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 13:15:28.546251 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:15:28.546256 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:15:28.546260 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 13:15:28.546265 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 13:15:28.546270 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:15:28.546275 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 13:15:28.546280 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 13:15:28.546284 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 9 13:15:28.546289 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 13:15:28.546294 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 13:15:28.546298 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 13:15:28.546303 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xf0a 01072009 AMI 00010013)
Feb 9 13:15:28.546308 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 13:15:28.546314 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 13:15:28.546318 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 13:15:28.546323 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 13:15:28.546328 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 13:15:28.546332 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Feb 9 13:15:28.546337 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Feb 9 13:15:28.546342 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Feb 9 13:15:28.546346 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Feb 9 13:15:28.546351 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Feb 9 13:15:28.546356 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Feb 9 13:15:28.546361 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Feb 9 13:15:28.546366 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Feb 9 13:15:28.546370 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Feb 9 13:15:28.546375 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Feb 9 13:15:28.546380 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Feb 9 13:15:28.546384 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Feb 9 13:15:28.546389 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Feb 9 13:15:28.546393 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Feb 9 13:15:28.546399 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Feb 9 13:15:28.546404 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Feb 9 13:15:28.546408 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Feb 9 13:15:28.546413 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Feb 9 13:15:28.546417 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Feb 9 13:15:28.546422 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Feb 9 13:15:28.546427 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Feb 9 13:15:28.546431 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Feb 9 13:15:28.546436 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Feb 9 13:15:28.546441 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Feb 9 13:15:28.546446 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Feb 9 13:15:28.546451 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Feb 9 13:15:28.546455 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Feb 9 13:15:28.546460 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Feb 9 13:15:28.546464 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Feb 9 13:15:28.546469 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Feb 9 13:15:28.546474 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Feb 9 13:15:28.546478 kernel: No NUMA configuration found
Feb 9 13:15:28.546484 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Feb 9 13:15:28.546489 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Feb 9 13:15:28.546493 kernel: Zone ranges:
Feb 9 13:15:28.546498 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 13:15:28.546502 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 13:15:28.546507 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 13:15:28.546512 kernel: Movable zone start for each node
Feb 9 13:15:28.546516 kernel: Early memory node ranges
Feb 9 13:15:28.546521 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 13:15:28.546527 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 13:15:28.546532 kernel: node 0: [mem 0x0000000040400000-0x0000000061f6efff]
Feb 9 13:15:28.546536 kernel: node 0: [mem 0x0000000061f71000-0x000000006c0c4fff]
Feb 9 13:15:28.546541 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Feb 9 13:15:28.546548 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Feb 9 13:15:28.546553 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 13:15:28.546557 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Feb 9 13:15:28.546567 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 13:15:28.546572 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 13:15:28.546577 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 13:15:28.546582 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 13:15:28.546587 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 9 13:15:28.546592 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 9 13:15:28.546597 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Feb 9 13:15:28.546616 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 13:15:28.546621 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 13:15:28.546626 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 13:15:28.546632 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 13:15:28.546637 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 13:15:28.546641 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 13:15:28.546646 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 13:15:28.546651 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 13:15:28.546656 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 13:15:28.546661 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 13:15:28.546665 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 13:15:28.546670 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 13:15:28.546676 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 13:15:28.546680 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 13:15:28.546685 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 13:15:28.546690 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 13:15:28.546695 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 13:15:28.546700 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 13:15:28.546705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 13:15:28.546709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 13:15:28.546714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 13:15:28.546720 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 13:15:28.546725 kernel: TSC deadline timer available
Feb 9 13:15:28.546730 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 13:15:28.546735 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Feb 9 13:15:28.546740 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 13:15:28.546744 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 13:15:28.546749 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 13:15:28.546754 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 13:15:28.546759 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 13:15:28.546765 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 13:15:28.546770 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Feb 9 13:15:28.546774 kernel: Policy zone: Normal
Feb 9 13:15:28.546780 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 13:15:28.546785 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 13:15:28.546790 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 13:15:28.546795 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 13:15:28.546799 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 13:15:28.546805 kernel: Memory: 32555728K/33281940K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 725952K reserved, 0K cma-reserved)
Feb 9 13:15:28.546811 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 13:15:28.546815 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 13:15:28.546820 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 13:15:28.546825 kernel: rcu: Hierarchical RCU implementation.
Feb 9 13:15:28.546830 kernel: rcu: RCU event tracing is enabled.
Feb 9 13:15:28.546835 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 13:15:28.546840 kernel: Rude variant of Tasks RCU enabled.
Feb 9 13:15:28.546844 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 13:15:28.546850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 13:15:28.546855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 13:15:28.546860 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 13:15:28.546865 kernel: random: crng init done
Feb 9 13:15:28.546869 kernel: Console: colour dummy device 80x25
Feb 9 13:15:28.546874 kernel: printk: console [tty0] enabled
Feb 9 13:15:28.546879 kernel: printk: console [ttyS1] enabled
Feb 9 13:15:28.546884 kernel: ACPI: Core revision 20210730
Feb 9 13:15:28.546889 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 9 13:15:28.546895 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 13:15:28.546900 kernel: DMAR: Host address width 39
Feb 9 13:15:28.546904 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 9 13:15:28.546909 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 9 13:15:28.546914 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 13:15:28.546919 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 13:15:28.546924 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Feb 9 13:15:28.546929 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Feb 9 13:15:28.546934 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 9 13:15:28.546939 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 13:15:28.546944 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 13:15:28.546949 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 13:15:28.546954 kernel: x2apic enabled
Feb 9 13:15:28.546959 kernel: Switched APIC routing to cluster x2apic.
Feb 9 13:15:28.546964 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 13:15:28.546968 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 13:15:28.546973 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 13:15:28.546978 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 13:15:28.546984 kernel: process: using mwait in idle threads
Feb 9 13:15:28.546989 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 13:15:28.546994 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 13:15:28.546999 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 13:15:28.547004 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 13:15:28.547009 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 13:15:28.547014 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 13:15:28.547018 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 13:15:28.547023 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 13:15:28.547029 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 13:15:28.547034 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 13:15:28.547039 kernel: TAA: Mitigation: TSX disabled
Feb 9 13:15:28.547044 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 13:15:28.547048 kernel: SRBDS: Mitigation: Microcode
Feb 9 13:15:28.547053 kernel: GDS: Vulnerable: No microcode
Feb 9 13:15:28.547058 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 13:15:28.547063 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 13:15:28.547068 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 13:15:28.547073 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 13:15:28.547078 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 13:15:28.547083 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 13:15:28.547088 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 13:15:28.547093 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 13:15:28.547098 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 13:15:28.547102 kernel: Freeing SMP alternatives memory: 32K
Feb 9 13:15:28.547107 kernel: pid_max: default: 32768 minimum: 301
Feb 9 13:15:28.547112 kernel: LSM: Security Framework initializing
Feb 9 13:15:28.547118 kernel: SELinux: Initializing.
Feb 9 13:15:28.547123 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 13:15:28.547127 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 13:15:28.547132 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 13:15:28.547137 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 13:15:28.547142 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 13:15:28.547147 kernel: ... version: 4
Feb 9 13:15:28.547152 kernel: ... bit width: 48
Feb 9 13:15:28.547157 kernel: ... generic registers: 4
Feb 9 13:15:28.547162 kernel: ... value mask: 0000ffffffffffff
Feb 9 13:15:28.547167 kernel: ... max period: 00007fffffffffff
Feb 9 13:15:28.547172 kernel: ... fixed-purpose events: 3
Feb 9 13:15:28.547177 kernel: ... event mask: 000000070000000f
Feb 9 13:15:28.547182 kernel: signal: max sigframe size: 2032
Feb 9 13:15:28.547186 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 13:15:28.547191 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 13:15:28.547196 kernel: smp: Bringing up secondary CPUs ...
Feb 9 13:15:28.547201 kernel: x86: Booting SMP configuration:
Feb 9 13:15:28.547207 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 13:15:28.547212 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 13:15:28.547217 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 13:15:28.547221 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 13:15:28.547226 kernel: smpboot: Max logical packages: 1
Feb 9 13:15:28.547231 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 13:15:28.547236 kernel: devtmpfs: initialized
Feb 9 13:15:28.547241 kernel: x86/mm: Memory block size: 128MB
Feb 9 13:15:28.547246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f6f000-0x61f6ffff] (4096 bytes)
Feb 9 13:15:28.547251 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Feb 9 13:15:28.547256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 13:15:28.547261 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 13:15:28.547266 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 13:15:28.547271 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 13:15:28.547276 kernel: audit: initializing netlink subsys (disabled)
Feb 9 13:15:28.547280 kernel: audit: type=2000 audit(1707484523.111:1): state=initialized audit_enabled=0 res=1
Feb 9 13:15:28.547285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 13:15:28.547291 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 13:15:28.547296 kernel: cpuidle: using governor menu
Feb 9 13:15:28.547300 kernel: ACPI: bus type PCI registered
Feb 9 13:15:28.547305 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 13:15:28.547310 kernel: dca service started, version 1.12.1
Feb 9 13:15:28.547315 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 13:15:28.547320 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 13:15:28.547325 kernel: PCI: Using configuration type 1 for base access
Feb 9 13:15:28.547329 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 13:15:28.547335 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 13:15:28.547340 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 13:15:28.547345 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 13:15:28.547350 kernel: ACPI: Added _OSI(Module Device)
Feb 9 13:15:28.547354 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 13:15:28.547359 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 13:15:28.547364 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 13:15:28.547369 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 13:15:28.547374 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 13:15:28.547380 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 13:15:28.547384 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 13:15:28.547389 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547394 kernel: ACPI: SSDT 0xFFFF998840215800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 13:15:28.547399 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 13:15:28.547404 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547409 kernel: ACPI: SSDT 0xFFFF998841CEA800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 13:15:28.547413 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547418 kernel: ACPI: SSDT 0xFFFF998841C5E800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 13:15:28.547423 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547428 kernel: ACPI: SSDT 0xFFFF998841C59800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 13:15:28.547433 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547438 kernel: ACPI: SSDT 0xFFFF99884014A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 13:15:28.547443 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 13:15:28.547448 kernel: ACPI: SSDT 0xFFFF998841CEAC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 13:15:28.547452 kernel: ACPI: Interpreter enabled
Feb 9 13:15:28.547457 kernel: ACPI: PM: (supports S0 S5)
Feb 9 13:15:28.547462 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 13:15:28.547467 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 13:15:28.547473 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 13:15:28.547477 kernel: HEST: Table parsing has been initialized.
Feb 9 13:15:28.547482 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 13:15:28.547487 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 13:15:28.547492 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 13:15:28.547497 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 13:15:28.547502 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 13:15:28.547507 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 13:15:28.547511 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 13:15:28.547517 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 13:15:28.547522 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 9 13:15:28.547527 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 13:15:28.547531 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 13:15:28.547536 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 13:15:28.547541 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 13:15:28.547547 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 13:15:28.547568 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 13:15:28.547573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 13:15:28.547653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 13:15:28.547700 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 13:15:28.547739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 13:15:28.547746 kernel: PCI host bridge to bus 0000:00
Feb 9 13:15:28.547788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 13:15:28.547825 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 13:15:28.547861 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 13:15:28.547898 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Feb 9 13:15:28.547934 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 13:15:28.547970 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 13:15:28.548018 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 13:15:28.548067 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 13:15:28.548110 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.548157 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 9 13:15:28.548200 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.548247 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 9 13:15:28.548290 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Feb 9 13:15:28.548330 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 9 13:15:28.548372 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 9 13:15:28.548418 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 13:15:28.548460 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Feb 9 13:15:28.548506 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 13:15:28.548549 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Feb 9 13:15:28.548625 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 13:15:28.548666 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Feb 9 13:15:28.548709 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 13:15:28.548755 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 13:15:28.548797 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Feb 9 13:15:28.548837 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Feb 9 13:15:28.548880 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 13:15:28.548921 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:15:28.548965 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 13:15:28.549009 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:15:28.549054 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 13:15:28.549095 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Feb 9 13:15:28.549136 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 13:15:28.549186 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 13:15:28.549230 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Feb 9 13:15:28.549272 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 13:15:28.549316 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 13:15:28.549357 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Feb 9 13:15:28.549398 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 13:15:28.549444 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 13:15:28.549485 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Feb 9 13:15:28.549526 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Feb 9 13:15:28.549600 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 9 13:15:28.549641 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 9 13:15:28.549681 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 9 13:15:28.549723 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Feb 9 13:15:28.549762 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 13:15:28.549807 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 13:15:28.549850 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.549897 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 13:15:28.549940 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.549984 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 13:15:28.550028 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.550073 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 13:15:28.550114 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.550158 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 9 13:15:28.550199 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.550244 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 13:15:28.550284 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 13:15:28.550334 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 13:15:28.550377 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 13:15:28.550418 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Feb 9 13:15:28.550458 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 13:15:28.550504 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 13:15:28.550545 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 13:15:28.550621 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 13:15:28.550668 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 13:15:28.550711 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 13:15:28.550753 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Feb 9 13:15:28.550795 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 9 13:15:28.550837 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 13:15:28.550879 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 13:15:28.550928 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 13:15:28.550970 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 13:15:28.551014 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Feb 9 13:15:28.551056 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 9 13:15:28.551098 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 13:15:28.551139 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 13:15:28.551181 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 9 13:15:28.551224 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Feb 9 13:15:28.551264 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 13:15:28.551306 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 9 13:15:28.551350 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 13:15:28.551393 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Feb 9 13:15:28.551434 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 13:15:28.551476 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Feb 9 13:15:28.551517 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 13:15:28.551582 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 9 13:15:28.551643 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 13:15:28.551683 kernel: pci 0000:00:1b.4:
bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 13:15:28.551731 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 9 13:15:28.551774 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 9 13:15:28.551816 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 13:15:28.551858 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 9 13:15:28.551902 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 9 13:15:28.551942 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 13:15:28.551984 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 13:15:28.552024 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 13:15:28.552065 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 13:15:28.552111 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 13:15:28.552154 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 9 13:15:28.552197 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 13:15:28.552242 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 13:15:28.552336 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 13:15:28.552376 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 13:15:28.552418 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.552463 kernel: pci_bus 0000:08: extended config space not accessible Feb 9 13:15:28.552513 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 13:15:28.552580 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 9 13:15:28.552647 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 9 13:15:28.552692 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 13:15:28.552736 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 13:15:28.552781 kernel: pci 0000:08:00.0: supports D1 D2 Feb 9 13:15:28.552825 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot 
D3cold Feb 9 13:15:28.552867 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 13:15:28.552910 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 13:15:28.552955 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.552963 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 13:15:28.552968 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 13:15:28.552973 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 13:15:28.552978 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 13:15:28.552983 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 13:15:28.552989 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 13:15:28.552994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 13:15:28.552999 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 13:15:28.553005 kernel: iommu: Default domain type: Translated Feb 9 13:15:28.553011 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 13:15:28.553055 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 9 13:15:28.553100 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 13:15:28.553144 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 9 13:15:28.553151 kernel: vgaarb: loaded Feb 9 13:15:28.553157 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 13:15:28.553162 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 13:15:28.553167 kernel: PTP clock support registered Feb 9 13:15:28.553174 kernel: PCI: Using ACPI for IRQ routing Feb 9 13:15:28.553179 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 13:15:28.553184 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 13:15:28.553189 kernel: e820: reserve RAM buffer [mem 0x61f6f000-0x63ffffff] Feb 9 13:15:28.553194 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 9 13:15:28.553199 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 9 13:15:28.553204 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 9 13:15:28.553209 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 13:15:28.553214 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 9 13:15:28.553221 kernel: clocksource: Switched to clocksource tsc-early Feb 9 13:15:28.553226 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 13:15:28.553231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 13:15:28.553237 kernel: pnp: PnP ACPI init Feb 9 13:15:28.553279 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 13:15:28.553322 kernel: pnp 00:02: [dma 0 disabled] Feb 9 13:15:28.553362 kernel: pnp 00:03: [dma 0 disabled] Feb 9 13:15:28.553404 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 13:15:28.553441 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 13:15:28.553481 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 13:15:28.553521 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 13:15:28.553583 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 13:15:28.553639 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 13:15:28.553675 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 13:15:28.553715 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved 
Feb 9 13:15:28.553752 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 13:15:28.553788 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 13:15:28.553826 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 13:15:28.553865 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 13:15:28.553903 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 13:15:28.553940 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 13:15:28.553977 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 13:15:28.554014 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 13:15:28.554049 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 13:15:28.554086 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 13:15:28.554128 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 13:15:28.554135 kernel: pnp: PnP ACPI: found 10 devices Feb 9 13:15:28.554141 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 13:15:28.554148 kernel: NET: Registered PF_INET protocol family Feb 9 13:15:28.554153 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 13:15:28.554158 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 13:15:28.554163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 13:15:28.554169 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 13:15:28.554174 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 13:15:28.554179 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 13:15:28.554185 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 13:15:28.554191 kernel: UDP-Lite hash table 
entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 13:15:28.554196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 13:15:28.554201 kernel: NET: Registered PF_XDP protocol family Feb 9 13:15:28.554243 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 9 13:15:28.554284 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 9 13:15:28.554326 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 9 13:15:28.554368 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 13:15:28.554411 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 13:15:28.554455 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 13:15:28.554500 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 13:15:28.554543 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 13:15:28.554630 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 13:15:28.554672 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 9 13:15:28.554715 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 13:15:28.554757 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 13:15:28.554798 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 13:15:28.554840 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 13:15:28.554881 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 13:15:28.554922 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 13:15:28.554964 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 13:15:28.555005 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 13:15:28.555046 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 13:15:28.555090 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 13:15:28.555134 kernel: pci 0000:07:00.0: bridge window [io 
0x3000-0x3fff] Feb 9 13:15:28.555176 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.555218 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 13:15:28.555259 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 13:15:28.555300 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.555338 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 13:15:28.555374 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 13:15:28.555413 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 13:15:28.555449 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 13:15:28.555484 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 9 13:15:28.555521 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 13:15:28.555586 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 9 13:15:28.555645 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 13:15:28.555688 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 9 13:15:28.555729 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 9 13:15:28.555770 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 9 13:15:28.555808 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 9 13:15:28.555850 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 13:15:28.555888 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.555928 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 9 13:15:28.555968 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 13:15:28.555976 kernel: PCI: CLS 64 bytes, default 64 Feb 9 13:15:28.555982 kernel: DMAR: No ATSR found Feb 9 13:15:28.555987 kernel: DMAR: No SATC found Feb 9 13:15:28.555992 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 9 
13:15:28.555998 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 9 13:15:28.556003 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 9 13:15:28.556008 kernel: DMAR: IOMMU feature pasid inconsistent Feb 9 13:15:28.556013 kernel: DMAR: IOMMU feature eafs inconsistent Feb 9 13:15:28.556018 kernel: DMAR: IOMMU feature prs inconsistent Feb 9 13:15:28.556025 kernel: DMAR: IOMMU feature nest inconsistent Feb 9 13:15:28.556030 kernel: DMAR: IOMMU feature mts inconsistent Feb 9 13:15:28.556035 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 9 13:15:28.556040 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 9 13:15:28.556045 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 13:15:28.556051 kernel: DMAR: dmar1: Using Queued invalidation Feb 9 13:15:28.556093 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 13:15:28.556134 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 13:15:28.556176 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 9 13:15:28.556219 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 9 13:15:28.556260 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 9 13:15:28.556301 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 9 13:15:28.556341 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 9 13:15:28.556382 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 9 13:15:28.556422 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 9 13:15:28.556463 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 9 13:15:28.556503 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 9 13:15:28.556548 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 9 13:15:28.556631 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 9 13:15:28.556671 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 9 13:15:28.556713 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 9 13:15:28.556754 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 9 13:15:28.556795 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 9 
13:15:28.556836 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 9 13:15:28.556878 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 9 13:15:28.556920 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 9 13:15:28.556960 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 9 13:15:28.557001 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 9 13:15:28.557042 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 9 13:15:28.557085 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 9 13:15:28.557128 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 9 13:15:28.557171 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 13:15:28.557213 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 9 13:15:28.557259 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 9 13:15:28.557302 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 9 13:15:28.557310 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 13:15:28.557315 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 13:15:28.557321 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 9 13:15:28.557326 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 9 13:15:28.557331 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 13:15:28.557337 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 13:15:28.557343 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 13:15:28.557349 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 9 13:15:28.557394 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 13:15:28.557402 kernel: Initialise system trusted keyrings Feb 9 13:15:28.557407 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 13:15:28.557413 kernel: Key type asymmetric registered Feb 9 13:15:28.557418 kernel: Asymmetric key parser 'x509' registered Feb 9 13:15:28.557423 kernel: Block layer 
SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 13:15:28.557430 kernel: io scheduler mq-deadline registered Feb 9 13:15:28.557435 kernel: io scheduler kyber registered Feb 9 13:15:28.557440 kernel: io scheduler bfq registered Feb 9 13:15:28.557480 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 9 13:15:28.557522 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 9 13:15:28.557588 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 9 13:15:28.557649 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 9 13:15:28.557691 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 9 13:15:28.557734 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 9 13:15:28.557776 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 9 13:15:28.557821 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 13:15:28.557829 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 13:15:28.557835 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 13:15:28.557840 kernel: pstore: Registered erst as persistent store backend Feb 9 13:15:28.557845 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 13:15:28.557851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 13:15:28.557857 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 13:15:28.557862 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 13:15:28.557905 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 13:15:28.557913 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 13:15:28.557949 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 13:15:28.557986 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 13:15:28.558024 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T13:15:27 UTC (1707484527) Feb 9 13:15:28.558061 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 13:15:28.558070 kernel: fail to initialize ptp_kvm Feb 9 13:15:28.558075 kernel: intel_pstate: Intel P-state driver initializing Feb 9 13:15:28.558080 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 13:15:28.558085 kernel: intel_pstate: HWP enabled Feb 9 13:15:28.558091 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 13:15:28.558096 kernel: vesafb: scrolling: redraw Feb 9 13:15:28.558101 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 13:15:28.558106 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x00000000cadb0b8d, using 768k, total 768k Feb 9 13:15:28.558113 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 13:15:28.558118 kernel: fb0: VESA VGA frame buffer device Feb 9 13:15:28.558123 kernel: NET: Registered PF_INET6 protocol family Feb 9 13:15:28.558128 kernel: Segment Routing with IPv6 Feb 9 13:15:28.558133 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 13:15:28.558139 kernel: NET: Registered PF_PACKET protocol family Feb 9 13:15:28.558144 kernel: Key type dns_resolver registered Feb 9 13:15:28.558149 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 13:15:28.558154 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 9 13:15:28.558159 kernel: IPI shorthand broadcast: enabled Feb 9 13:15:28.558165 kernel: sched_clock: Marking stable (1838524760, 1353729887)->(4616397384, -1424142737) Feb 9 13:15:28.558171 kernel: registered taskstats version 1 Feb 9 13:15:28.558176 kernel: Loading compiled-in X.509 certificates Feb 9 13:15:28.558181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 13:15:28.558186 kernel: Key type .fscrypt registered Feb 9 13:15:28.558191 kernel: Key type fscrypt-provisioning registered Feb 9 13:15:28.558196 kernel: pstore: Using crash dump compression: deflate Feb 9 13:15:28.558202 kernel: ima: Allocated hash algorithm: sha1 Feb 9 13:15:28.558208 kernel: ima: No architecture policies found Feb 9 13:15:28.558213 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 13:15:28.558218 kernel: Write protecting the kernel read-only data: 28672k Feb 9 13:15:28.558223 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 13:15:28.558228 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 13:15:28.558234 kernel: Run /init as init process Feb 9 13:15:28.558239 kernel: with arguments: Feb 9 13:15:28.558244 kernel: /init Feb 9 13:15:28.558249 kernel: with environment: Feb 9 13:15:28.558255 kernel: HOME=/ Feb 9 13:15:28.558260 kernel: TERM=linux Feb 9 13:15:28.558265 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 13:15:28.558272 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 13:15:28.558278 systemd[1]: Detected architecture x86-64. Feb 9 13:15:28.558284 systemd[1]: Running in initrd. 
Feb 9 13:15:28.558289 systemd[1]: No hostname configured, using default hostname. Feb 9 13:15:28.558294 systemd[1]: Hostname set to . Feb 9 13:15:28.558301 systemd[1]: Initializing machine ID from random generator. Feb 9 13:15:28.558306 systemd[1]: Queued start job for default target initrd.target. Feb 9 13:15:28.558311 systemd[1]: Started systemd-ask-password-console.path. Feb 9 13:15:28.558317 systemd[1]: Reached target cryptsetup.target. Feb 9 13:15:28.558322 systemd[1]: Reached target paths.target. Feb 9 13:15:28.558327 systemd[1]: Reached target slices.target. Feb 9 13:15:28.558332 systemd[1]: Reached target swap.target. Feb 9 13:15:28.558338 systemd[1]: Reached target timers.target. Feb 9 13:15:28.558344 systemd[1]: Listening on iscsid.socket. Feb 9 13:15:28.558349 systemd[1]: Listening on iscsiuio.socket. Feb 9 13:15:28.558355 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 13:15:28.558360 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 13:15:28.558366 systemd[1]: Listening on systemd-journald.socket. Feb 9 13:15:28.558371 systemd[1]: Listening on systemd-networkd.socket. Feb 9 13:15:28.558376 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 13:15:28.558382 kernel: tsc: Refined TSC clocksource calibration: 3408.046 MHz Feb 9 13:15:28.558388 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fff667c0, max_idle_ns: 440795358023 ns Feb 9 13:15:28.558393 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 13:15:28.558398 kernel: clocksource: Switched to clocksource tsc Feb 9 13:15:28.558404 systemd[1]: Reached target sockets.target. Feb 9 13:15:28.558409 systemd[1]: Starting kmod-static-nodes.service... Feb 9 13:15:28.558414 systemd[1]: Finished network-cleanup.service. Feb 9 13:15:28.558420 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 13:15:28.558425 systemd[1]: Starting systemd-journald.service... Feb 9 13:15:28.558431 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 13:15:28.558439 systemd-journald[267]: Journal started Feb 9 13:15:28.558464 systemd-journald[267]: Runtime Journal (/run/log/journal/bdd495be73114e79afa6aff436931717) is 8.0M, max 636.8M, 628.8M free. Feb 9 13:15:28.561236 systemd-modules-load[268]: Inserted module 'overlay' Feb 9 13:15:28.567000 audit: BPF prog-id=6 op=LOAD Feb 9 13:15:28.585590 kernel: audit: type=1334 audit(1707484528.567:2): prog-id=6 op=LOAD Feb 9 13:15:28.585619 systemd[1]: Starting systemd-resolved.service... Feb 9 13:15:28.634583 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 13:15:28.634618 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 13:15:28.666579 kernel: Bridge firewalling registered Feb 9 13:15:28.666595 systemd[1]: Started systemd-journald.service. Feb 9 13:15:28.680938 systemd-modules-load[268]: Inserted module 'br_netfilter' Feb 9 13:15:28.730515 kernel: audit: type=1130 audit(1707484528.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.686353 systemd-resolved[270]: Positive Trust Anchors: Feb 9 13:15:28.794601 kernel: SCSI subsystem initialized Feb 9 13:15:28.794636 kernel: audit: type=1130 audit(1707484528.742:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:28.686361 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 13:15:28.908344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 13:15:28.908425 kernel: device-mapper: uevent: version 1.0.3 Feb 9 13:15:28.908442 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 13:15:28.908455 kernel: audit: type=1130 audit(1707484528.865:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.686380 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 13:15:28.981796 kernel: audit: type=1130 audit(1707484528.916:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.687892 systemd-resolved[270]: Defaulting to hostname 'linux'. 
Feb 9 13:15:28.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.688774 systemd[1]: Started systemd-resolved.service. Feb 9 13:15:29.089630 kernel: audit: type=1130 audit(1707484528.990:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.089653 kernel: audit: type=1130 audit(1707484529.043:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:28.742756 systemd[1]: Finished kmod-static-nodes.service. Feb 9 13:15:28.865986 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 13:15:28.908849 systemd-modules-load[268]: Inserted module 'dm_multipath' Feb 9 13:15:28.916834 systemd[1]: Finished systemd-modules-load.service. Feb 9 13:15:28.990894 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 13:15:29.043829 systemd[1]: Reached target nss-lookup.target. Feb 9 13:15:29.098136 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 13:15:29.118074 systemd[1]: Starting systemd-sysctl.service... Feb 9 13:15:29.118374 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 13:15:29.121219 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 13:15:29.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:29.122016 systemd[1]: Finished systemd-sysctl.service. Feb 9 13:15:29.170649 kernel: audit: type=1130 audit(1707484529.120:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.182870 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 13:15:29.248612 kernel: audit: type=1130 audit(1707484529.182:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.240143 systemd[1]: Starting dracut-cmdline.service... Feb 9 13:15:29.262649 dracut-cmdline[292]: dracut-dracut-053 Feb 9 13:15:29.262649 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 13:15:29.262649 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 13:15:29.330627 kernel: Loading iSCSI transport class v2.0-870. 
Feb 9 13:15:29.330642 kernel: iscsi: registered transport (tcp) Feb 9 13:15:29.378774 kernel: iscsi: registered transport (qla4xxx) Feb 9 13:15:29.378823 kernel: QLogic iSCSI HBA Driver Feb 9 13:15:29.395064 systemd[1]: Finished dracut-cmdline.service. Feb 9 13:15:29.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:29.404228 systemd[1]: Starting dracut-pre-udev.service... Feb 9 13:15:29.459591 kernel: raid6: avx2x4 gen() 48700 MB/s Feb 9 13:15:29.494582 kernel: raid6: avx2x4 xor() 22152 MB/s Feb 9 13:15:29.529617 kernel: raid6: avx2x2 gen() 54845 MB/s Feb 9 13:15:29.564617 kernel: raid6: avx2x2 xor() 32753 MB/s Feb 9 13:15:29.599615 kernel: raid6: avx2x1 gen() 46148 MB/s Feb 9 13:15:29.634613 kernel: raid6: avx2x1 xor() 28476 MB/s Feb 9 13:15:29.668552 kernel: raid6: sse2x4 gen() 21771 MB/s Feb 9 13:15:29.702612 kernel: raid6: sse2x4 xor() 11984 MB/s Feb 9 13:15:29.736613 kernel: raid6: sse2x2 gen() 22100 MB/s Feb 9 13:15:29.770617 kernel: raid6: sse2x2 xor() 13631 MB/s Feb 9 13:15:29.804613 kernel: raid6: sse2x1 gen() 18659 MB/s Feb 9 13:15:29.856076 kernel: raid6: sse2x1 xor() 9102 MB/s Feb 9 13:15:29.856090 kernel: raid6: using algorithm avx2x2 gen() 54845 MB/s Feb 9 13:15:29.856098 kernel: raid6: .... xor() 32753 MB/s, rmw enabled Feb 9 13:15:29.874116 kernel: raid6: using avx2x2 recovery algorithm Feb 9 13:15:29.920553 kernel: xor: automatically using best checksumming function avx Feb 9 13:15:29.997555 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 13:15:30.002524 systemd[1]: Finished dracut-pre-udev.service. Feb 9 13:15:30.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:30.002000 audit: BPF prog-id=7 op=LOAD Feb 9 13:15:30.002000 audit: BPF prog-id=8 op=LOAD Feb 9 13:15:30.003341 systemd[1]: Starting systemd-udevd.service... Feb 9 13:15:30.010950 systemd-udevd[472]: Using default interface naming scheme 'v252'. Feb 9 13:15:30.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:30.023847 systemd[1]: Started systemd-udevd.service. Feb 9 13:15:30.063678 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Feb 9 13:15:30.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:30.039139 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 13:15:30.063551 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 13:15:30.072285 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 13:15:30.120391 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 13:15:30.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:30.147560 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 13:15:30.183721 kernel: ACPI: bus type USB registered Feb 9 13:15:30.183756 kernel: usbcore: registered new interface driver usbfs Feb 9 13:15:30.183766 kernel: usbcore: registered new interface driver hub Feb 9 13:15:30.218628 kernel: usbcore: registered new device driver usb Feb 9 13:15:30.219554 kernel: libata version 3.00 loaded. Feb 9 13:15:30.255468 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 13:15:30.255513 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 13:15:30.288554 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 13:15:30.288576 kernel: AES CTR mode by8 optimization enabled Feb 9 13:15:30.288583 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 9 13:15:30.288660 kernel: pps pps0: new PPS source ptp0 Feb 9 13:15:30.288721 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 9 13:15:30.288775 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 13:15:30.288826 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:66 Feb 9 13:15:30.288875 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 9 13:15:30.288923 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 13:15:30.353553 kernel: pps pps1: new PPS source ptp1 Feb 9 13:15:30.353640 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 13:15:30.353711 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 9 13:15:30.465703 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 13:15:30.465781 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:67 Feb 9 13:15:30.495236 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 9 13:15:30.495307 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 13:15:30.515554 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 13:15:30.515656 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 9 13:15:30.528553 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 9 13:15:30.528635 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 13:15:30.556709 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 13:15:30.556781 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 13:15:30.617700 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 13:15:30.617876 kernel: scsi host0: ahci Feb 9 13:15:30.618040 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 13:15:30.618183 kernel: scsi host1: ahci Feb 9 13:15:30.633551 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 13:15:30.633667 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 13:15:30.658009 kernel: scsi host2: ahci Feb 9 13:15:30.658031 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 13:15:30.687409 kernel: scsi host3: ahci Feb 9 13:15:30.687553 kernel: hub 1-0:1.0: USB hub found Feb 9 13:15:30.704588 kernel: scsi host4: ahci Feb 9 13:15:30.704623 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 9 13:15:30.715552 kernel: hub 1-0:1.0: 16 ports detected Feb 9 13:15:30.728555 kernel: scsi host5: ahci Feb 9 13:15:30.768253 kernel: hub 2-0:1.0: USB hub found Feb 9 13:15:30.768427 kernel: scsi host6: ahci Feb 9 13:15:30.768445 kernel: hub 2-0:1.0: 10 ports detected Feb 9 13:15:30.784562 kernel: scsi host7: ahci Feb 9 13:15:30.784609 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 13:15:30.806046 kernel: usb: port power management may be unreliable Feb 9 13:15:30.806082 kernel: ata1: SATA max UDMA/133 abar 
m2048@0x7e516000 port 0x7e516100 irq 139 Feb 9 13:15:30.898523 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 139 Feb 9 13:15:30.898538 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 139 Feb 9 13:15:30.915548 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 139 Feb 9 13:15:30.932496 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 139 Feb 9 13:15:30.949302 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 139 Feb 9 13:15:30.965957 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 139 Feb 9 13:15:30.982459 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 13:15:30.993552 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 13:15:30.993580 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 9 13:15:30.993656 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 13:15:31.016944 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 139 Feb 9 13:15:31.170594 kernel: hub 1-14:1.0: USB hub found Feb 9 13:15:31.170681 kernel: hub 1-14:1.0: 4 ports detected Feb 9 13:15:31.292626 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 13:15:31.326951 kernel: port_module: 9 callbacks suppressed Feb 9 13:15:31.326967 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 9 13:15:31.360568 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 13:15:31.392552 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.392586 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.412560 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 13:15:31.429582 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.444575 
kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.459583 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 13:15:31.475550 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 13:15:31.475572 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.506578 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 13:15:31.521549 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 13:15:31.536613 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 13:15:31.583349 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 13:15:31.583389 kernel: ata1.00: Features: NCQ-prio Feb 9 13:15:31.583398 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 13:15:31.602590 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 13:15:31.630731 kernel: ata2.00: Features: NCQ-prio Feb 9 13:15:31.631616 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 13:15:31.645638 kernel: ata1.00: configured for UDMA/133 Feb 9 13:15:31.658631 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 13:15:31.675581 kernel: ata2.00: configured for UDMA/133 Feb 9 13:15:31.689608 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 13:15:31.726552 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 9 13:15:31.756259 kernel: usbcore: registered new interface driver usbhid Feb 9 13:15:31.756275 kernel: usbhid: USB HID core driver Feb 9 13:15:31.789615 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 13:15:31.804873 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:31.804900 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 9 13:15:31.804980 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 
13:15:31.835952 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 13:15:31.836040 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 13:15:31.843582 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 13:15:31.843668 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 13:15:31.843677 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 13:15:31.871235 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 13:15:31.871311 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 13:15:31.871371 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 13:15:31.904339 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 13:15:31.939049 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 13:15:31.974911 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 13:15:31.992220 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 13:15:32.079816 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:32.079831 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 13:15:32.136738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 13:15:32.136756 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 13:15:32.136764 kernel: GPT:9289727 != 937703087 Feb 9 13:15:32.168357 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 13:15:32.168375 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 13:15:32.168384 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 13:15:32.202674 kernel: GPT:9289727 != 937703087 Feb 9 13:15:32.202691 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 9 13:15:32.202699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:15:32.268798 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:32.268812 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 13:15:32.331732 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 13:15:32.355771 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (549) Feb 9 13:15:32.347931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 13:15:32.365798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 13:15:32.396674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 13:15:32.414812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 13:15:32.425632 systemd[1]: Starting disk-uuid.service... Feb 9 13:15:32.463592 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:32.463606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:15:32.463660 disk-uuid[694]: Primary Header is updated. Feb 9 13:15:32.463660 disk-uuid[694]: Secondary Entries is updated. Feb 9 13:15:32.463660 disk-uuid[694]: Secondary Header is updated. Feb 9 13:15:32.535668 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:32.535679 kernel: GPT:disk_guids don't match. Feb 9 13:15:32.535686 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 13:15:32.535692 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:15:32.535698 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:32.575587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:15:33.524945 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 13:15:33.543575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 13:15:33.543591 disk-uuid[695]: The operation has completed successfully. Feb 9 13:15:33.579453 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 9 13:15:33.674766 kernel: audit: type=1130 audit(1707484533.586:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.674780 kernel: audit: type=1131 audit(1707484533.586:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.579498 systemd[1]: Finished disk-uuid.service. Feb 9 13:15:33.703640 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 13:15:33.589927 systemd[1]: Starting verity-setup.service... Feb 9 13:15:33.731961 systemd[1]: Found device dev-mapper-usr.device. Feb 9 13:15:33.741570 systemd[1]: Mounting sysusr-usr.mount... Feb 9 13:15:33.758791 systemd[1]: Finished verity-setup.service. Feb 9 13:15:33.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.811553 kernel: audit: type=1130 audit(1707484533.766:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.838925 systemd[1]: Mounted sysusr-usr.mount. Feb 9 13:15:33.852750 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Feb 9 13:15:33.845822 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 13:15:33.932038 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 13:15:33.932053 kernel: BTRFS info (device sda6): using free space tree Feb 9 13:15:33.932060 kernel: BTRFS info (device sda6): has skinny extents Feb 9 13:15:33.932070 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 13:15:33.846216 systemd[1]: Starting ignition-setup.service... Feb 9 13:15:33.869375 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 13:15:33.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.941019 systemd[1]: Finished ignition-setup.service. Feb 9 13:15:34.056724 kernel: audit: type=1130 audit(1707484533.956:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.056739 kernel: audit: type=1130 audit(1707484534.010:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:33.956885 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 13:15:34.085440 kernel: audit: type=1334 audit(1707484534.064:24): prog-id=9 op=LOAD Feb 9 13:15:34.064000 audit: BPF prog-id=9 op=LOAD Feb 9 13:15:34.011179 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 13:15:34.065374 systemd[1]: Starting systemd-networkd.service... 
Feb 9 13:15:34.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.131192 ignition[871]: Ignition 2.14.0 Feb 9 13:15:34.165673 kernel: audit: type=1130 audit(1707484534.107:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.098914 systemd-networkd[880]: lo: Link UP Feb 9 13:15:34.131196 ignition[871]: Stage: fetch-offline Feb 9 13:15:34.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.098917 systemd-networkd[880]: lo: Gained carrier Feb 9 13:15:34.284219 kernel: audit: type=1130 audit(1707484534.179:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.284231 kernel: audit: type=1130 audit(1707484534.235:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:34.131221 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:34.099205 systemd-networkd[880]: Enumeration completed Feb 9 13:15:34.131235 ignition[871]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:34.099251 systemd[1]: Started systemd-networkd.service. Feb 9 13:15:34.139680 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:34.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.100000 systemd-networkd[880]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:15:34.354712 iscsid[901]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 13:15:34.354712 iscsid[901]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 13:15:34.354712 iscsid[901]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 13:15:34.354712 iscsid[901]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 13:15:34.354712 iscsid[901]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 9 13:15:34.354712 iscsid[901]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 13:15:34.354712 iscsid[901]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 13:15:34.497668 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 13:15:34.497748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 9 13:15:34.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.139744 ignition[871]: parsed url from cmdline: "" Feb 9 13:15:34.127761 systemd[1]: Reached target network.target. Feb 9 13:15:34.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:34.139746 ignition[871]: no config URL provided Feb 9 13:15:34.149731 unknown[871]: fetched base config from "system" Feb 9 13:15:34.139749 ignition[871]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 13:15:34.149735 unknown[871]: fetched user config from "system" Feb 9 13:15:34.139767 ignition[871]: parsing config with SHA512: 16bd9f5b20c81b9249eb7eceac177f6d178ca8edf6d8c1305ed65cd6c17833db6f2701dc693d126417bb6cf60bcf89a1570a2978c249d7befab3175863b4015f Feb 9 13:15:34.161155 systemd[1]: Starting iscsiuio.service... Feb 9 13:15:34.149989 ignition[871]: fetch-offline: fetch-offline passed Feb 9 13:15:34.172779 systemd[1]: Started iscsiuio.service. Feb 9 13:15:34.149992 ignition[871]: POST message to Packet Timeline Feb 9 13:15:34.179866 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 9 13:15:34.149996 ignition[871]: POST Status error: resource requires networking Feb 9 13:15:34.235669 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 13:15:34.150025 ignition[871]: Ignition finished successfully Feb 9 13:15:34.642659 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 13:15:34.236130 systemd[1]: Starting ignition-kargs.service... Feb 9 13:15:34.288701 ignition[891]: Ignition 2.14.0 Feb 9 13:15:34.291201 systemd[1]: Starting iscsid.service... Feb 9 13:15:34.288704 ignition[891]: Stage: kargs Feb 9 13:15:34.311823 systemd[1]: Started iscsid.service. Feb 9 13:15:34.288758 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:34.326368 systemd[1]: Starting dracut-initqueue.service... Feb 9 13:15:34.288767 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:34.342834 systemd[1]: Finished dracut-initqueue.service. Feb 9 13:15:34.290038 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:34.362866 systemd[1]: Reached target remote-fs-pre.target. Feb 9 13:15:34.291581 ignition[891]: kargs: kargs passed Feb 9 13:15:34.381601 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 13:15:34.291585 ignition[891]: POST message to Packet Timeline Feb 9 13:15:34.430528 systemd-networkd[880]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:15:34.291595 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:15:34.446653 systemd[1]: Reached target remote-fs.target. 
Feb 9 13:15:34.294804 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45157->[::1]:53: read: connection refused Feb 9 13:15:34.461232 systemd[1]: Starting dracut-pre-mount.service... Feb 9 13:15:34.495125 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 13:15:34.485711 systemd[1]: Finished dracut-pre-mount.service. Feb 9 13:15:34.495388 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35290->[::1]:53: read: connection refused Feb 9 13:15:34.637127 systemd-networkd[880]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:15:34.665863 systemd-networkd[880]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 13:15:34.694120 systemd-networkd[880]: enp2s0f1np1: Link UP Feb 9 13:15:34.694375 systemd-networkd[880]: enp2s0f1np1: Gained carrier Feb 9 13:15:34.708053 systemd-networkd[880]: enp2s0f0np0: Link UP Feb 9 13:15:34.708400 systemd-networkd[880]: eno2: Link UP Feb 9 13:15:34.708751 systemd-networkd[880]: eno1: Link UP Feb 9 13:15:34.896257 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 13:15:34.897564 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55893->[::1]:53: read: connection refused Feb 9 13:15:35.471875 systemd-networkd[880]: enp2s0f0np0: Gained carrier Feb 9 13:15:35.480776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 9 13:15:35.507752 systemd-networkd[880]: enp2s0f0np0: DHCPv4 address 86.109.11.101/31, gateway 86.109.11.100 acquired from 145.40.83.140 Feb 9 13:15:35.698021 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 13:15:35.699172 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on 
[::1]:53: read udp [::1]:36981->[::1]:53: read: connection refused Feb 9 13:15:35.771039 systemd-networkd[880]: enp2s0f1np1: Gained IPv6LL Feb 9 13:15:36.667016 systemd-networkd[880]: enp2s0f0np0: Gained IPv6LL Feb 9 13:15:37.300842 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 13:15:37.302049 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48725->[::1]:53: read: connection refused Feb 9 13:15:40.505511 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 13:15:40.544295 ignition[891]: GET result: OK Feb 9 13:15:40.758690 ignition[891]: Ignition finished successfully Feb 9 13:15:40.763359 systemd[1]: Finished ignition-kargs.service. Feb 9 13:15:40.844187 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 9 13:15:40.844203 kernel: audit: type=1130 audit(1707484540.773:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:40.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:40.782446 ignition[920]: Ignition 2.14.0 Feb 9 13:15:40.775851 systemd[1]: Starting ignition-disks.service... 
Feb 9 13:15:40.782450 ignition[920]: Stage: disks Feb 9 13:15:40.782505 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:40.782515 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:40.785092 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:40.785729 ignition[920]: disks: disks passed Feb 9 13:15:40.785732 ignition[920]: POST message to Packet Timeline Feb 9 13:15:40.785742 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:15:40.814207 ignition[920]: GET result: OK Feb 9 13:15:41.048387 ignition[920]: Ignition finished successfully Feb 9 13:15:41.051126 systemd[1]: Finished ignition-disks.service. Feb 9 13:15:41.114574 kernel: audit: type=1130 audit(1707484541.063:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.064179 systemd[1]: Reached target initrd-root-device.target. Feb 9 13:15:41.122773 systemd[1]: Reached target local-fs-pre.target. Feb 9 13:15:41.122807 systemd[1]: Reached target local-fs.target. Feb 9 13:15:41.146763 systemd[1]: Reached target sysinit.target. Feb 9 13:15:41.160757 systemd[1]: Reached target basic.target. Feb 9 13:15:41.174392 systemd[1]: Starting systemd-fsck-root.service... Feb 9 13:15:41.194064 systemd-fsck[936]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 13:15:41.206569 systemd[1]: Finished systemd-fsck-root.service. 
Feb 9 13:15:41.292153 kernel: audit: type=1130 audit(1707484541.214:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.292170 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 13:15:41.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.219804 systemd[1]: Mounting sysroot.mount... Feb 9 13:15:41.300157 systemd[1]: Mounted sysroot.mount. Feb 9 13:15:41.313803 systemd[1]: Reached target initrd-root-fs.target. Feb 9 13:15:41.321515 systemd[1]: Mounting sysroot-usr.mount... Feb 9 13:15:41.342491 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 13:15:41.357169 systemd[1]: Starting flatcar-static-network.service... Feb 9 13:15:41.372711 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 13:15:41.372874 systemd[1]: Reached target ignition-diskful.target. Feb 9 13:15:41.391324 systemd[1]: Mounted sysroot-usr.mount. Feb 9 13:15:41.414451 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 13:15:41.482650 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (948) Feb 9 13:15:41.482670 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 13:15:41.427255 systemd[1]: Starting initrd-setup-root.service... Feb 9 13:15:41.553953 kernel: BTRFS info (device sda6): using free space tree Feb 9 13:15:41.553967 kernel: BTRFS info (device sda6): has skinny extents Feb 9 13:15:41.553975 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 13:15:41.487164 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 13:15:41.615722 kernel: audit: type=1130 audit(1707484541.562:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.615767 coreos-metadata[945]: Feb 09 13:15:41.493 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 13:15:41.615767 coreos-metadata[945]: Feb 09 13:15:41.516 INFO Fetch successful Feb 9 13:15:41.799513 kernel: audit: type=1130 audit(1707484541.623:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.799524 kernel: audit: type=1130 audit(1707484541.686:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.799534 kernel: audit: type=1131 audit(1707484541.686:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:41.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.799595 coreos-metadata[944]: Feb 09 13:15:41.493 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 13:15:41.799595 coreos-metadata[944]: Feb 09 13:15:41.530 INFO Fetch successful Feb 9 13:15:41.799595 coreos-metadata[944]: Feb 09 13:15:41.549 INFO wrote hostname ci-3510.3.2-a-f9072dee11 to /sysroot/etc/hostname Feb 9 13:15:41.847629 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 13:15:41.563895 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 13:15:41.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.901772 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Feb 9 13:15:41.940724 kernel: audit: type=1130 audit(1707484541.872:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.623845 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 9 13:15:41.951856 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 13:15:41.623883 systemd[1]: Finished flatcar-static-network.service. Feb 9 13:15:41.972021 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 13:15:41.686812 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 13:15:41.989871 ignition[1023]: INFO : Ignition 2.14.0 Feb 9 13:15:41.989871 ignition[1023]: INFO : Stage: mount Feb 9 13:15:41.989871 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:41.989871 ignition[1023]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:41.989871 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:41.989871 ignition[1023]: INFO : mount: mount passed Feb 9 13:15:41.989871 ignition[1023]: INFO : POST message to Packet Timeline Feb 9 13:15:41.989871 ignition[1023]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:15:41.989871 ignition[1023]: INFO : GET result: OK Feb 9 13:15:41.808193 systemd[1]: Starting ignition-mount.service... Feb 9 13:15:42.086908 ignition[1023]: INFO : Ignition finished successfully Feb 9 13:15:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.836161 systemd[1]: Starting sysroot-boot.service... Feb 9 13:15:42.174712 kernel: audit: type=1130 audit(1707484542.094:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:41.855386 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 13:15:41.855427 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 13:15:42.270631 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1037) Feb 9 13:15:42.270642 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 13:15:42.270649 kernel: BTRFS info (device sda6): using free space tree Feb 9 13:15:42.270656 kernel: BTRFS info (device sda6): has skinny extents Feb 9 13:15:42.270662 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 13:15:41.856111 systemd[1]: Finished sysroot-boot.service. Feb 9 13:15:42.081284 systemd[1]: Finished ignition-mount.service. Feb 9 13:15:42.096674 systemd[1]: Starting ignition-files.service... Feb 9 13:15:42.168528 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 13:15:42.302486 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 13:15:42.347775 ignition[1056]: INFO : Ignition 2.14.0 Feb 9 13:15:42.347775 ignition[1056]: INFO : Stage: files Feb 9 13:15:42.347775 ignition[1056]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:42.347775 ignition[1056]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:42.347775 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:42.347775 ignition[1056]: DEBUG : files: compiled without relabeling support, skipping Feb 9 13:15:42.347775 ignition[1056]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 13:15:42.347775 ignition[1056]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 13:15:42.347775 ignition[1056]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 13:15:42.347775 ignition[1056]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 13:15:42.347775 ignition[1056]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user 
"core" Feb 9 13:15:42.347775 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 13:15:42.347775 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 13:15:42.334738 unknown[1056]: wrote ssh authorized keys file for user: core Feb 9 13:15:42.827505 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 13:15:42.910638 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 13:15:42.910638 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 13:15:42.953771 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 13:15:42.953771 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 13:15:43.319326 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 13:15:43.397247 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 13:15:43.422796 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 13:15:43.422796 ignition[1056]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 13:15:43.422796 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 13:15:43.528982 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 13:15:43.897207 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 13:15:43.922788 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 13:15:43.922788 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 13:15:43.922788 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 13:15:43.970745 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 13:15:44.668700 ignition[1056]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 13:15:44.668700 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 13:15:44.718755 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1078) Feb 9 13:15:44.705602 systemd[1]: mnt-oem4086754609.mount: Deactivated successfully. 
Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4086754609" Feb 9 13:15:44.727780 ignition[1056]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4086754609": device or resource busy Feb 9 13:15:44.727780 ignition[1056]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4086754609", trying btrfs: device or resource busy Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem4086754609" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4086754609" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem4086754609" Feb 9 13:15:44.727780 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem4086754609" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(e): [started] processing unit "packet-phone-home.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(e): [finished] processing unit "packet-phone-home.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at 
"/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 13:15:44.981891 ignition[1056]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 13:15:45.376783 kernel: audit: type=1130 audit(1707484545.087:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:45.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.078803 systemd[1]: Finished ignition-files.service. Feb 9 13:15:45.390748 ignition[1056]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 13:15:45.390748 ignition[1056]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 13:15:45.390748 ignition[1056]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 13:15:45.390748 ignition[1056]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 13:15:45.390748 ignition[1056]: INFO : files: files passed Feb 9 13:15:45.390748 ignition[1056]: INFO : POST message to Packet Timeline Feb 9 13:15:45.390748 ignition[1056]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:15:45.390748 ignition[1056]: INFO : GET result: OK Feb 9 13:15:45.390748 ignition[1056]: INFO : Ignition finished successfully Feb 9 13:15:45.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.093472 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 13:15:45.581016 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 13:15:45.154760 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 13:15:45.155077 systemd[1]: Starting ignition-quench.service... Feb 9 13:15:45.184810 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 13:15:45.211985 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 13:15:45.212066 systemd[1]: Finished ignition-quench.service. Feb 9 13:15:45.239075 systemd[1]: Reached target ignition-complete.target. Feb 9 13:15:45.262891 systemd[1]: Starting initrd-parse-etc.service... Feb 9 13:15:45.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.306676 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 13:15:45.306905 systemd[1]: Finished initrd-parse-etc.service. Feb 9 13:15:45.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.318047 systemd[1]: Reached target initrd-fs.target. 
Feb 9 13:15:45.855742 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 13:15:45.855758 kernel: audit: type=1131 audit(1707484545.774:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.339784 systemd[1]: Reached target initrd.target. Feb 9 13:15:45.361953 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 13:15:45.363941 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 13:15:45.383856 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 13:15:45.391200 systemd[1]: Starting initrd-cleanup.service... Feb 9 13:15:45.425515 systemd[1]: Stopped target nss-lookup.target. Feb 9 13:15:45.441121 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 13:15:45.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.465200 systemd[1]: Stopped target timers.target. Feb 9 13:15:46.077653 kernel: audit: type=1131 audit(1707484545.951:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.077670 kernel: audit: type=1131 audit(1707484546.019:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:46.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.488127 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 13:15:46.146911 kernel: audit: type=1131 audit(1707484546.086:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.488484 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 13:15:46.160807 ignition[1107]: INFO : Ignition 2.14.0 Feb 9 13:15:46.160807 ignition[1107]: INFO : Stage: umount Feb 9 13:15:46.160807 ignition[1107]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 13:15:46.160807 ignition[1107]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 13:15:46.160807 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 13:15:46.160807 ignition[1107]: INFO : umount: umount passed Feb 9 13:15:46.160807 ignition[1107]: INFO : POST message to Packet Timeline Feb 9 13:15:46.160807 ignition[1107]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 13:15:46.160807 ignition[1107]: INFO : GET result: OK Feb 9 13:15:46.606075 kernel: audit: type=1131 audit(1707484546.188:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:46.606092 kernel: audit: type=1131 audit(1707484546.281:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.606100 kernel: audit: type=1131 audit(1707484546.348:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.606107 kernel: audit: type=1131 audit(1707484546.415:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.606114 kernel: audit: type=1131 audit(1707484546.481:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.606121 kernel: audit: type=1131 audit(1707484546.547:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:15:46.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.504590 systemd[1]: Stopped target initrd.target. Feb 9 13:15:46.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:46.622833 ignition[1107]: INFO : Ignition finished successfully Feb 9 13:15:45.520144 systemd[1]: Stopped target basic.target. Feb 9 13:15:45.537145 systemd[1]: Stopped target ignition-complete.target. Feb 9 13:15:46.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.553293 systemd[1]: Stopped target ignition-diskful.target. Feb 9 13:15:46.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:15:45.571267 systemd[1]: Stopped target initrd-root-device.target. 
Feb 9 13:15:46.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.590171 systemd[1]: Stopped target remote-fs.target.
Feb 9 13:15:46.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.613137 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 13:15:45.639301 systemd[1]: Stopped target sysinit.target.
Feb 9 13:15:45.655287 systemd[1]: Stopped target local-fs.target.
Feb 9 13:15:46.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.673278 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 13:15:46.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:46.770000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 13:15:45.691272 systemd[1]: Stopped target swap.target.
Feb 9 13:15:45.705159 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 13:15:45.705526 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 13:15:46.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.723493 systemd[1]: Stopped target cryptsetup.target.
Feb 9 13:15:46.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.741186 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 13:15:46.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.741562 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 13:15:45.758290 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 13:15:46.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.758673 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 13:15:45.775348 systemd[1]: Stopped target paths.target.
Feb 9 13:15:45.862777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 13:15:45.868744 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 13:15:46.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.888888 systemd[1]: Stopped target slices.target.
Feb 9 13:15:46.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.904851 systemd[1]: Stopped target sockets.target.
Feb 9 13:15:46.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.920980 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 13:15:45.921058 systemd[1]: Closed iscsid.socket.
Feb 9 13:15:46.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.935099 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 13:15:47.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:47.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:45.935260 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 13:15:45.952198 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 13:15:45.952573 systemd[1]: Stopped ignition-files.service.
Feb 9 13:15:46.019929 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 13:15:46.020068 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 13:15:46.087701 systemd[1]: Stopping ignition-mount.service...
Feb 9 13:15:46.153910 systemd[1]: Stopping iscsiuio.service...
Feb 9 13:15:46.167797 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 13:15:46.167866 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 13:15:46.189656 systemd[1]: Stopping sysroot-boot.service...
Feb 9 13:15:46.254739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 13:15:46.254808 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 13:15:46.281803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 13:15:46.281917 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 13:15:46.351289 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 13:15:46.352030 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 13:15:47.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:46.352110 systemd[1]: Stopped iscsiuio.service.
Feb 9 13:15:46.416203 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 13:15:46.416289 systemd[1]: Stopped ignition-mount.service.
Feb 9 13:15:46.482098 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 13:15:46.482187 systemd[1]: Stopped sysroot-boot.service.
Feb 9 13:15:47.236591 iscsid[901]: iscsid shutting down.
Feb 9 13:15:46.548818 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 13:15:46.548905 systemd[1]: Finished initrd-cleanup.service.
Feb 9 13:15:46.614815 systemd[1]: Stopped target network.target.
Feb 9 13:15:46.630765 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 13:15:46.630782 systemd[1]: Closed iscsiuio.socket.
Feb 9 13:15:46.644762 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 13:15:46.644785 systemd[1]: Stopped ignition-disks.service.
Feb 9 13:15:46.657728 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 13:15:46.657753 systemd[1]: Stopped ignition-kargs.service.
Feb 9 13:15:46.674807 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 13:15:46.674862 systemd[1]: Stopped ignition-setup.service.
Feb 9 13:15:46.690830 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 13:15:46.690904 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 13:15:46.709333 systemd[1]: Stopping systemd-networkd.service...
Feb 9 13:15:46.718713 systemd-networkd[880]: enp2s0f1np1: DHCPv6 lease lost
Feb 9 13:15:46.722990 systemd[1]: Stopping systemd-resolved.service...
Feb 9 13:15:46.726774 systemd-networkd[880]: enp2s0f0np0: DHCPv6 lease lost
Feb 9 13:15:46.740389 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 13:15:47.236000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 13:15:46.740620 systemd[1]: Stopped systemd-resolved.service.
Feb 9 13:15:46.756171 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 13:15:46.756483 systemd[1]: Stopped systemd-networkd.service.
Feb 9 13:15:46.769860 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 13:15:46.769879 systemd[1]: Closed systemd-networkd.socket.
Feb 9 13:15:46.787219 systemd[1]: Stopping network-cleanup.service...
Feb 9 13:15:46.800781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 13:15:46.800915 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 13:15:46.816954 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 13:15:46.817089 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 13:15:46.832263 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 13:15:46.832393 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 13:15:46.847146 systemd[1]: Stopping systemd-udevd.service...
Feb 9 13:15:46.865244 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 13:15:46.866535 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 13:15:46.866842 systemd[1]: Stopped systemd-udevd.service.
Feb 9 13:15:46.880183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 13:15:46.880292 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 13:15:46.895889 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 13:15:46.895985 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 13:15:46.911802 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 13:15:46.911953 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 13:15:46.934975 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 13:15:46.935021 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 13:15:46.950726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 13:15:46.950849 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 13:15:46.967189 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 13:15:46.980715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 13:15:46.980746 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 13:15:46.997910 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 13:15:46.997967 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 13:15:47.237585 systemd-journald[267]: Received SIGTERM from PID 1 (n/a).
Feb 9 13:15:47.145603 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 13:15:47.145814 systemd[1]: Stopped network-cleanup.service.
Feb 9 13:15:47.157120 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 13:15:47.175453 systemd[1]: Starting initrd-switch-root.service...
Feb 9 13:15:47.192718 systemd[1]: Switching root.
Feb 9 13:15:47.237713 systemd-journald[267]: Journal stopped
Feb 9 13:15:51.233724 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 13:15:51.233738 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 13:15:51.233746 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 13:15:51.233751 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 13:15:51.233756 kernel: SELinux: policy capability open_perms=1
Feb 9 13:15:51.233761 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 13:15:51.233767 kernel: SELinux: policy capability always_check_network=0
Feb 9 13:15:51.233773 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 13:15:51.233778 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 13:15:51.233784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 13:15:51.233790 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 13:15:51.233796 systemd[1]: Successfully loaded SELinux policy in 326.513ms.
Feb 9 13:15:51.233802 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.430ms.
Feb 9 13:15:51.233809 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 13:15:51.233817 systemd[1]: Detected architecture x86-64.
Feb 9 13:15:51.233822 systemd[1]: Detected first boot.
Feb 9 13:15:51.233828 systemd[1]: Hostname set to .
Feb 9 13:15:51.233834 systemd[1]: Initializing machine ID from random generator.
Feb 9 13:15:51.233840 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 13:15:51.233846 systemd[1]: Populated /etc with preset unit settings.
Feb 9 13:15:51.233852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 13:15:51.233859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 13:15:51.233866 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 13:15:51.233873 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 13:15:51.233878 systemd[1]: Stopped iscsid.service.
Feb 9 13:15:51.233884 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 13:15:51.233890 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 13:15:51.233897 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 13:15:51.233904 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 13:15:51.233910 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 13:15:51.233916 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 13:15:51.233922 systemd[1]: Created slice system-getty.slice.
Feb 9 13:15:51.233928 systemd[1]: Created slice system-modprobe.slice.
Feb 9 13:15:51.233934 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 13:15:51.233940 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 13:15:51.233946 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 13:15:51.233953 systemd[1]: Created slice user.slice.
Feb 9 13:15:51.233959 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 13:15:51.233965 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 13:15:51.233971 systemd[1]: Set up automount boot.automount.
Feb 9 13:15:51.233979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 13:15:51.233985 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 13:15:51.233991 systemd[1]: Stopped target initrd-fs.target.
Feb 9 13:15:51.233998 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 13:15:51.234005 systemd[1]: Reached target integritysetup.target.
Feb 9 13:15:51.234011 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 13:15:51.234017 systemd[1]: Reached target remote-fs.target.
Feb 9 13:15:51.234024 systemd[1]: Reached target slices.target.
Feb 9 13:15:51.234030 systemd[1]: Reached target swap.target.
Feb 9 13:15:51.234036 systemd[1]: Reached target torcx.target.
Feb 9 13:15:51.234042 systemd[1]: Reached target veritysetup.target.
Feb 9 13:15:51.234049 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 13:15:51.234056 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 13:15:51.234063 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 13:15:51.234069 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 13:15:51.234075 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 13:15:51.234082 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 13:15:51.234088 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 13:15:51.234095 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 13:15:51.234102 systemd[1]: Mounting media.mount...
Feb 9 13:15:51.234108 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 13:15:51.234115 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 13:15:51.234121 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 13:15:51.234128 systemd[1]: Mounting tmp.mount...
Feb 9 13:15:51.234134 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 13:15:51.234141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 13:15:51.234148 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 13:15:51.234154 systemd[1]: Starting modprobe@configfs.service...
Feb 9 13:15:51.234161 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 13:15:51.234167 systemd[1]: Starting modprobe@drm.service...
Feb 9 13:15:51.234173 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 13:15:51.234180 systemd[1]: Starting modprobe@fuse.service...
Feb 9 13:15:51.234186 kernel: fuse: init (API version 7.34)
Feb 9 13:15:51.234192 systemd[1]: Starting modprobe@loop.service...
Feb 9 13:15:51.234198 kernel: loop: module loaded
Feb 9 13:15:51.234205 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 13:15:51.234212 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 13:15:51.234218 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 13:15:51.234225 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 13:15:51.234231 kernel: kauditd_printk_skb: 49 callbacks suppressed
Feb 9 13:15:51.234237 kernel: audit: type=1131 audit(1707484550.875:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.234243 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 13:15:51.234250 kernel: audit: type=1131 audit(1707484550.963:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.234257 systemd[1]: Stopped systemd-journald.service.
Feb 9 13:15:51.234263 kernel: audit: type=1130 audit(1707484551.027:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.234269 kernel: audit: type=1131 audit(1707484551.027:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.234275 kernel: audit: type=1334 audit(1707484551.112:106): prog-id=15 op=LOAD
Feb 9 13:15:51.234280 kernel: audit: type=1334 audit(1707484551.130:107): prog-id=16 op=LOAD
Feb 9 13:15:51.234286 kernel: audit: type=1334 audit(1707484551.149:108): prog-id=17 op=LOAD
Feb 9 13:15:51.234292 kernel: audit: type=1334 audit(1707484551.167:109): prog-id=13 op=UNLOAD
Feb 9 13:15:51.234299 systemd[1]: Starting systemd-journald.service...
Feb 9 13:15:51.234306 kernel: audit: type=1334 audit(1707484551.167:110): prog-id=14 op=UNLOAD
Feb 9 13:15:51.234311 systemd[1]: Starting systemd-modules-load.service...
Feb 9 13:15:51.234318 kernel: audit: type=1305 audit(1707484551.231:111): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 13:15:51.234325 systemd-journald[1257]: Journal started
Feb 9 13:15:51.234349 systemd-journald[1257]: Runtime Journal (/run/log/journal/2757bafda82042f1985510519ee85d0f) is 8.0M, max 636.8M, 628.8M free.
Feb 9 13:15:47.681000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 13:15:47.974000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 13:15:47.976000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 13:15:47.976000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 13:15:47.976000 audit: BPF prog-id=10 op=LOAD
Feb 9 13:15:47.976000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 13:15:47.976000 audit: BPF prog-id=11 op=LOAD
Feb 9 13:15:47.976000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 13:15:48.044000 audit[1148]: AVC avc: denied { associate } for pid=1148 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 13:15:48.044000 audit[1148]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001278e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1131 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:15:48.044000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 13:15:48.071000 audit[1148]: AVC avc: denied { associate } for pid=1148 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 13:15:48.071000 audit[1148]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001279b9 a2=1ed a3=0 items=2 ppid=1131 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:15:48.071000 audit: CWD cwd="/"
Feb 9 13:15:48.071000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:48.071000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:48.071000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 13:15:49.585000 audit: BPF prog-id=12 op=LOAD
Feb 9 13:15:49.585000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 13:15:49.585000 audit: BPF prog-id=13 op=LOAD
Feb 9 13:15:49.586000 audit: BPF prog-id=14 op=LOAD
Feb 9 13:15:49.586000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 13:15:49.586000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 13:15:49.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:49.633000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 13:15:49.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:49.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:49.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:50.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:50.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.112000 audit: BPF prog-id=15 op=LOAD
Feb 9 13:15:51.130000 audit: BPF prog-id=16 op=LOAD
Feb 9 13:15:51.149000 audit: BPF prog-id=17 op=LOAD
Feb 9 13:15:51.167000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 13:15:51.167000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 13:15:51.231000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 13:15:49.584207 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 13:15:48.043705 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 13:15:49.587043 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 13:15:48.044106 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 13:15:48.044118 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 13:15:48.044136 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 13:15:48.044143 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 13:15:48.044160 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 13:15:48.044168 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 13:15:48.044286 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 13:15:48.044307 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 13:15:48.044314 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 13:15:48.044737 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 13:15:48.044758 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 13:15:48.044770 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 13:15:48.044779 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 13:15:48.044789 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 13:15:48.044797 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 13:15:49.237180 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 13:15:49.237329 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 13:15:49.237384 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 13:15:49.237475 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 13:15:49.237506 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 13:15:49.237543 /usr/lib/systemd/system-generators/torcx-generator[1148]: time="2024-02-09T13:15:49Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 13:15:51.231000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe60732ce0 a2=4000 a3=7ffe60732d7c items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:15:51.231000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 13:15:51.311773 systemd[1]: Starting systemd-network-generator.service...
Feb 9 13:15:51.338552 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 13:15:51.365630 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 13:15:51.408334 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 13:15:51.408358 systemd[1]: Stopped verity-setup.service.
Feb 9 13:15:51.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.453584 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 13:15:51.473711 systemd[1]: Started systemd-journald.service.
Feb 9 13:15:51.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.482164 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 13:15:51.489790 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 13:15:51.496794 systemd[1]: Mounted media.mount.
Feb 9 13:15:51.503802 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 13:15:51.512785 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 13:15:51.520769 systemd[1]: Mounted tmp.mount.
Feb 9 13:15:51.527869 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 13:15:51.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.536895 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 13:15:51.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.545933 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 13:15:51.546059 systemd[1]: Finished modprobe@configfs.service.
Feb 9 13:15:51.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.555035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 13:15:51.555191 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 13:15:51.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.564108 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 13:15:51.564303 systemd[1]: Finished modprobe@drm.service.
Feb 9 13:15:51.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.573400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 13:15:51.573848 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 13:15:51.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.583483 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 13:15:51.583806 systemd[1]: Finished modprobe@fuse.service.
Feb 9 13:15:51.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.592344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 13:15:51.592670 systemd[1]: Finished modprobe@loop.service.
Feb 9 13:15:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.601382 systemd[1]: Finished systemd-modules-load.service.
Feb 9 13:15:51.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.610352 systemd[1]: Finished systemd-network-generator.service.
Feb 9 13:15:51.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.620330 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 13:15:51.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.629332 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 13:15:51.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.639810 systemd[1]: Reached target network-pre.target.
Feb 9 13:15:51.651319 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 13:15:51.660245 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 13:15:51.667773 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 13:15:51.668671 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 13:15:51.676217 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 13:15:51.679967 systemd-journald[1257]: Time spent on flushing to /var/log/journal/2757bafda82042f1985510519ee85d0f is 15.425ms for 1624 entries.
Feb 9 13:15:51.679967 systemd-journald[1257]: System Journal (/var/log/journal/2757bafda82042f1985510519ee85d0f) is 8.0M, max 195.6M, 187.6M free.
Feb 9 13:15:51.727138 systemd-journald[1257]: Received client request to flush runtime journal.
Feb 9 13:15:51.692697 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 13:15:51.693170 systemd[1]: Starting systemd-random-seed.service...
Feb 9 13:15:51.708693 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 13:15:51.709195 systemd[1]: Starting systemd-sysctl.service...
Feb 9 13:15:51.717354 systemd[1]: Starting systemd-sysusers.service...
Feb 9 13:15:51.725152 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 13:15:51.733972 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 13:15:51.741723 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 13:15:51.749773 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 13:15:51.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.757754 systemd[1]: Finished systemd-random-seed.service.
Feb 9 13:15:51.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.765768 systemd[1]: Finished systemd-sysctl.service.
Feb 9 13:15:51.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.773752 systemd[1]: Finished systemd-sysusers.service.
Feb 9 13:15:51.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.782747 systemd[1]: Reached target first-boot-complete.target.
Feb 9 13:15:51.790874 udevadm[1273]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 13:15:51.972495 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 13:15:51.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:51.981000 audit: BPF prog-id=18 op=LOAD
Feb 9 13:15:51.981000 audit: BPF prog-id=19 op=LOAD
Feb 9 13:15:51.982000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 13:15:51.982000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 13:15:51.982873 systemd[1]: Starting systemd-udevd.service...
Feb 9 13:15:51.994530 systemd-udevd[1274]: Using default interface naming scheme 'v252'.
Feb 9 13:15:52.013917 systemd[1]: Started systemd-udevd.service.
Feb 9 13:15:52.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:52.023648 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Feb 9 13:15:52.023000 audit: BPF prog-id=20 op=LOAD
Feb 9 13:15:52.025008 systemd[1]: Starting systemd-networkd.service...
Feb 9 13:15:52.047000 audit: BPF prog-id=21 op=LOAD
Feb 9 13:15:52.065681 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 9 13:15:52.065764 kernel: ACPI: button: Sleep Button [SLPB]
Feb 9 13:15:52.065000 audit: BPF prog-id=22 op=LOAD
Feb 9 13:15:52.065000 audit: BPF prog-id=23 op=LOAD
Feb 9 13:15:52.066317 systemd[1]: Starting systemd-userdbd.service...
Feb 9 13:15:52.086851 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 9 13:15:52.086894 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1330)
Feb 9 13:15:52.111589 kernel: ACPI: button: Power Button [PWRF]
Feb 9 13:15:52.129554 kernel: IPMI message handler: version 39.2
Feb 9 13:15:52.148238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 13:15:52.182007 systemd[1]: Started systemd-userdbd.service.
Feb 9 13:15:52.069000 audit[1352]: AVC avc: denied { confidentiality } for pid=1352 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 13:15:52.069000 audit[1352]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560c91dc70e0 a1=4d8bc a2=7fda9f24dbc5 a3=5 items=42 ppid=1274 pid=1352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:15:52.069000 audit: CWD cwd="/"
Feb 9 13:15:52.069000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=1 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=2 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=3 name=(null) inode=8608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=4 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=5 name=(null) inode=8609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=6 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=7 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=8 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=9 name=(null) inode=8611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=10 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=11 name=(null) inode=8612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=12 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=13 name=(null) inode=8613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=14 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=15 name=(null) inode=8614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=16 name=(null) inode=8610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=17 name=(null) inode=8615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=18 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=19 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=20 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=21 name=(null) inode=8617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=22 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=23 name=(null) inode=8618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=24 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=25 name=(null) inode=8619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=26 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=27 name=(null) inode=8620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=28 name=(null) inode=8616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=29 name=(null) inode=8621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=30 name=(null) inode=8607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=31 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=32 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=33 name=(null) inode=8623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=34 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=35 name=(null) inode=8624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=36 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=37 name=(null) inode=8625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=38 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=39 name=(null) inode=8626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=40 name=(null) inode=8622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PATH item=41 name=(null) inode=8627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 13:15:52.069000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 13:15:52.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:52.206565 kernel: ipmi device interface
Feb 9 13:15:52.226556 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 13:15:52.246553 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Feb 9 13:15:52.246722 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Feb 9 13:15:52.246843 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Feb 9 13:15:52.246956 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Feb 9 13:15:52.247037 kernel: ipmi_si: IPMI System Interface driver
Feb 9 13:15:52.247055 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Feb 9 13:15:52.247120 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Feb 9 13:15:52.247133 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Feb 9 13:15:52.247144 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Feb 9 13:15:52.247207 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Feb 9 13:15:52.247552 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Feb 9 13:15:52.502846 systemd-networkd[1320]: bond0: netdev ready
Feb 9 13:15:52.504295 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Feb 9 13:15:52.504406 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Feb 9 13:15:52.504426 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Feb 9 13:15:52.504954 systemd-networkd[1320]: lo: Link UP
Feb 9 13:15:52.504958 systemd-networkd[1320]: lo: Gained carrier
Feb 9 13:15:52.505497 systemd-networkd[1320]: Enumeration completed
Feb 9 13:15:52.505560 systemd[1]: Started systemd-networkd.service.
Feb 9 13:15:52.505827 systemd-networkd[1320]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Feb 9 13:15:52.506479 systemd-networkd[1320]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e9.network.
Feb 9 13:15:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:52.547553 kernel: iTCO_vendor_support: vendor-support=0
Feb 9 13:15:52.574553 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Feb 9 13:15:52.574655 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Feb 9 13:15:52.653596 kernel: intel_rapl_common: Found RAPL domain package
Feb 9 13:15:52.653638 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Feb 9 13:15:52.653720 kernel: intel_rapl_common: Found RAPL domain core
Feb 9 13:15:52.653734 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 13:15:52.655433 systemd-networkd[1320]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network.
Feb 9 13:15:52.655552 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Feb 9 13:15:52.718588 kernel: intel_rapl_common: Found RAPL domain uncore
Feb 9 13:15:52.718623 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:15:52.718637 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 9 13:15:52.726405 kernel: intel_rapl_common: Found RAPL domain dram
Feb 9 13:15:52.816560 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 9 13:15:52.816621 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Feb 9 13:15:52.855612 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Feb 9 13:15:52.855636 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:15:52.876553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Feb 9 13:15:52.905842 systemd-networkd[1320]: bond0: Link UP
Feb 9 13:15:52.906812 systemd-networkd[1320]: enp2s0f1np1: Link UP
Feb 9 13:15:52.907440 systemd-networkd[1320]: enp2s0f1np1: Gained carrier
Feb 9 13:15:52.912068 systemd-networkd[1320]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network.
Feb 9 13:15:52.942553 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:52.946782 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 13:15:52.963552 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:52.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:52.978272 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 13:15:52.984563 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:52.984584 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:15:52.984595 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 9 13:15:52.999434 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 13:15:53.044590 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.065597 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.085589 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.105594 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.106984 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 13:15:53.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.123640 systemd[1]: Reached target cryptsetup.target.
Feb 9 13:15:53.126586 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.142162 systemd[1]: Starting lvm2-activation.service...
Feb 9 13:15:53.144313 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 13:15:53.146552 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.165552 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.184560 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.203550 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.222551 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.230045 systemd[1]: Finished lvm2-activation.service.
Feb 9 13:15:53.240551 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.256633 systemd[1]: Reached target local-fs-pre.target.
Feb 9 13:15:53.258551 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.274596 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 13:15:53.274618 systemd[1]: Reached target local-fs.target.
Feb 9 13:15:53.276549 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.292605 systemd[1]: Reached target machines.target.
Feb 9 13:15:53.295550 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.312249 systemd[1]: Starting ldconfig.service...
Feb 9 13:15:53.313551 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.328329 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 13:15:53.328350 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 13:15:53.328885 systemd[1]: Starting systemd-boot-update.service...
Feb 9 13:15:53.332600 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.347072 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 13:15:53.351550 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.369351 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 13:15:53.369428 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 13:15:53.369449 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 13:15:53.369550 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.370073 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 13:15:53.370333 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1383 (bootctl)
Feb 9 13:15:53.370965 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 13:15:53.386610 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.403618 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.421619 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.421713 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 13:15:53.437580 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.452595 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.454023 systemd-networkd[1320]: enp2s0f0np0: Link UP
Feb 9 13:15:53.454772 systemd-networkd[1320]: bond0: Gained carrier
Feb 9 13:15:53.455200 systemd-networkd[1320]: enp2s0f0np0: Gained carrier
Feb 9 13:15:53.488951 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms
Feb 9 13:15:53.489007 kernel: bond0: (slave enp2s0f1np1): link status definitely down, disabling slave
Feb 9 13:15:53.514930 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Feb 9 13:15:53.514960 kernel: bond0: active interface up!
Feb 9 13:15:53.526187 systemd-networkd[1320]: enp2s0f1np1: Link DOWN
Feb 9 13:15:53.526194 systemd-networkd[1320]: enp2s0f1np1: Lost carrier
Feb 9 13:15:53.581657 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 13:15:53.694587 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 13:15:53.697395 systemd-networkd[1320]: enp2s0f1np1: Link UP
Feb 9 13:15:53.697583 systemd-networkd[1320]: enp2s0f1np1: Gained carrier
Feb 9 13:15:53.745171 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 13:15:53.748139 kernel: bond0: (slave enp2s0f1np1): link status up, enabling it in 200 ms
Feb 9 13:15:53.748170 kernel: bond0: (slave enp2s0f1np1): invalid new link 3 on slave
Feb 9 13:15:53.758393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 13:15:53.758735 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 13:15:53.758814 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 13:15:53.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.790655 systemd-fsck[1391]: fsck.fat 4.2 (2021-01-31)
Feb 9 13:15:53.790655 systemd-fsck[1391]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 9 13:15:53.791351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 13:15:53.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.802653 systemd[1]: Mounting boot.mount...
Feb 9 13:15:53.814165 systemd[1]: Mounted boot.mount.
Feb 9 13:15:53.832290 systemd[1]: Finished systemd-boot-update.service.
Feb 9 13:15:53.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.863116 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 13:15:53.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:15:53.872391 systemd[1]: Starting audit-rules.service...
Feb 9 13:15:53.879259 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 13:15:53.888188 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 13:15:53.891000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 13:15:53.891000 audit[1412]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda40dc8f0 a2=420 a3=0 items=0 ppid=1395 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:15:53.891000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 13:15:53.891750 augenrules[1412]: No rules Feb 9 13:15:53.897600 systemd[1]: Starting systemd-resolved.service... Feb 9 13:15:53.905500 systemd[1]: Starting systemd-timesyncd.service... Feb 9 13:15:53.914055 systemd[1]: Starting systemd-update-utmp.service... Feb 9 13:15:53.921839 systemd[1]: Finished audit-rules.service. Feb 9 13:15:53.929696 systemd[1]: Finished clean-ca-certificates.service. Feb 9 13:15:53.938691 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 13:15:53.951193 systemd[1]: Finished systemd-update-utmp.service. Feb 9 13:15:53.953595 ldconfig[1382]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 13:15:53.965811 systemd[1]: Finished ldconfig.service. Feb 9 13:15:53.971552 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 13:15:53.978254 systemd[1]: Starting systemd-update-done.service... Feb 9 13:15:53.982888 systemd-resolved[1417]: Positive Trust Anchors: Feb 9 13:15:53.982893 systemd-resolved[1417]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 13:15:53.982912 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 13:15:53.984642 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 13:15:53.984768 systemd[1]: Started systemd-timesyncd.service. Feb 9 13:15:53.986956 systemd-resolved[1417]: Using system hostname 'ci-3510.3.2-a-f9072dee11'. Feb 9 13:15:53.993740 systemd[1]: Started systemd-resolved.service. Feb 9 13:15:54.001764 systemd[1]: Finished systemd-update-done.service. Feb 9 13:15:54.009690 systemd[1]: Reached target network.target. Feb 9 13:15:54.017621 systemd[1]: Reached target nss-lookup.target. Feb 9 13:15:54.025626 systemd[1]: Reached target sysinit.target. Feb 9 13:15:54.033628 systemd[1]: Started motdgen.path. Feb 9 13:15:54.040606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 13:15:54.049625 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 13:15:54.057617 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 13:15:54.057639 systemd[1]: Reached target paths.target. Feb 9 13:15:54.064621 systemd[1]: Reached target time-set.target. Feb 9 13:15:54.072682 systemd[1]: Started logrotate.timer. Feb 9 13:15:54.079659 systemd[1]: Started mdadm.timer. Feb 9 13:15:54.086614 systemd[1]: Reached target timers.target. 
Feb 9 13:15:54.093737 systemd[1]: Listening on dbus.socket. Feb 9 13:15:54.101123 systemd[1]: Starting docker.socket... Feb 9 13:15:54.109021 systemd[1]: Listening on sshd.socket. Feb 9 13:15:54.115691 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 13:15:54.115922 systemd[1]: Listening on docker.socket. Feb 9 13:15:54.122680 systemd[1]: Reached target sockets.target. Feb 9 13:15:54.130630 systemd[1]: Reached target basic.target. Feb 9 13:15:54.137655 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 13:15:54.137674 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 13:15:54.138144 systemd[1]: Starting containerd.service... Feb 9 13:15:54.145048 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 13:15:54.154079 systemd[1]: Starting coreos-metadata.service... Feb 9 13:15:54.161203 systemd[1]: Starting dbus.service... Feb 9 13:15:54.167347 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 13:15:54.172104 jq[1432]: false Feb 9 13:15:54.174321 systemd[1]: Starting extend-filesystems.service... Feb 9 13:15:54.174759 coreos-metadata[1425]: Feb 09 13:15:54.174 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 13:15:54.180347 dbus-daemon[1431]: [system] SELinux support is enabled Feb 9 13:15:54.180638 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 13:15:54.181300 systemd[1]: Starting motdgen.service... Feb 9 13:15:54.182532 extend-filesystems[1433]: Found sda Feb 9 13:15:54.189249 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 9 13:15:54.202743 coreos-metadata[1428]: Feb 09 13:15:54.184 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda1 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda2 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda3 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found usr Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda4 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda6 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda7 Feb 9 13:15:54.202864 extend-filesystems[1433]: Found sda9 Feb 9 13:15:54.202864 extend-filesystems[1433]: Checking size of /dev/sda9 Feb 9 13:15:54.202864 extend-filesystems[1433]: Resized partition /dev/sda9 Feb 9 13:15:54.318632 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 9 13:15:54.318694 extend-filesystems[1449]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 13:15:54.220302 systemd[1]: Starting prepare-critools.service... Feb 9 13:15:54.235132 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 13:15:54.250085 systemd[1]: Starting sshd-keygen.service... Feb 9 13:15:54.264781 systemd[1]: Starting systemd-logind.service... Feb 9 13:15:54.279584 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 13:15:54.280100 systemd[1]: Starting tcsd.service... 
Feb 9 13:15:54.341048 update_engine[1463]: I0209 13:15:54.339694 1463 main.cc:92] Flatcar Update Engine starting Feb 9 13:15:54.285665 systemd-logind[1461]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 13:15:54.341248 jq[1464]: true Feb 9 13:15:54.285674 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 13:15:54.285682 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 13:15:54.285871 systemd-logind[1461]: New seat seat0. Feb 9 13:15:54.292859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 13:15:54.293204 systemd[1]: Starting update-engine.service... Feb 9 13:15:54.311208 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 13:15:54.332885 systemd[1]: Started dbus.service. Feb 9 13:15:54.343059 update_engine[1463]: I0209 13:15:54.343019 1463 update_check_scheduler.cc:74] Next update check in 5m46s Feb 9 13:15:54.349305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 13:15:54.349404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 13:15:54.349577 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 13:15:54.349656 systemd[1]: Finished motdgen.service. Feb 9 13:15:54.357311 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 13:15:54.357411 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 13:15:54.362218 tar[1466]: ./ Feb 9 13:15:54.362218 tar[1466]: ./macvlan Feb 9 13:15:54.368194 jq[1470]: true Feb 9 13:15:54.368683 dbus-daemon[1431]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 13:15:54.369708 tar[1467]: crictl Feb 9 13:15:54.374454 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 13:15:54.374544 systemd[1]: Condition check resulted in tcsd.service being skipped. 
Feb 9 13:15:54.375677 systemd[1]: Started update-engine.service. Feb 9 13:15:54.378068 env[1471]: time="2024-02-09T13:15:54.378045434Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 13:15:54.384162 tar[1466]: ./static Feb 9 13:15:54.386209 env[1471]: time="2024-02-09T13:15:54.386182598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 13:15:54.386714 env[1471]: time="2024-02-09T13:15:54.386675421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:15:54.387236 env[1471]: time="2024-02-09T13:15:54.387221149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:15:54.387236 env[1471]: time="2024-02-09T13:15:54.387235192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:15:54.387719 systemd[1]: Started systemd-logind.service. Feb 9 13:15:54.388945 env[1471]: time="2024-02-09T13:15:54.388919819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:15:54.388945 env[1471]: time="2024-02-09T13:15:54.388934168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 13:15:54.389002 env[1471]: time="2024-02-09T13:15:54.388946102Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 13:15:54.389002 env[1471]: time="2024-02-09T13:15:54.388951977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 13:15:54.389002 env[1471]: time="2024-02-09T13:15:54.388992639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:15:54.391070 env[1471]: time="2024-02-09T13:15:54.391034292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 13:15:54.391125 env[1471]: time="2024-02-09T13:15:54.391114155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 13:15:54.391147 env[1471]: time="2024-02-09T13:15:54.391125105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 13:15:54.391175 env[1471]: time="2024-02-09T13:15:54.391152508Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 13:15:54.391175 env[1471]: time="2024-02-09T13:15:54.391160800Z" level=info msg="metadata content store policy set" policy=shared Feb 9 13:15:54.395302 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Feb 9 13:15:54.395754 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 13:15:54.399729 env[1471]: time="2024-02-09T13:15:54.399716512Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 9 13:15:54.399773 env[1471]: time="2024-02-09T13:15:54.399733940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 13:15:54.399773 env[1471]: time="2024-02-09T13:15:54.399747314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 13:15:54.399773 env[1471]: time="2024-02-09T13:15:54.399766429Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399775677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399783798Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399790983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399801110Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399808482Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399815811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399822065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.399845 env[1471]: time="2024-02-09T13:15:54.399828259Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 9 13:15:54.399975 env[1471]: time="2024-02-09T13:15:54.399871960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 13:15:54.399975 env[1471]: time="2024-02-09T13:15:54.399918881Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 13:15:54.400054 env[1471]: time="2024-02-09T13:15:54.400045542Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 13:15:54.400087 env[1471]: time="2024-02-09T13:15:54.400060827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400087 env[1471]: time="2024-02-09T13:15:54.400068244Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 13:15:54.400119 env[1471]: time="2024-02-09T13:15:54.400095662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400119 env[1471]: time="2024-02-09T13:15:54.400103384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400119 env[1471]: time="2024-02-09T13:15:54.400110164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400119 env[1471]: time="2024-02-09T13:15:54.400116363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400183 env[1471]: time="2024-02-09T13:15:54.400122708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400183 env[1471]: time="2024-02-09T13:15:54.400129016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 9 13:15:54.400183 env[1471]: time="2024-02-09T13:15:54.400134772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400183 env[1471]: time="2024-02-09T13:15:54.400140691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400183 env[1471]: time="2024-02-09T13:15:54.400147705Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400214973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400223978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400230381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400236672Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400247212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400253848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 13:15:54.400270 env[1471]: time="2024-02-09T13:15:54.400263211Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 13:15:54.400384 env[1471]: time="2024-02-09T13:15:54.400286327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 13:15:54.400424 env[1471]: time="2024-02-09T13:15:54.400398936Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400434818Z" level=info msg="Connect containerd service" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400454368Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400734976Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400818762Z" level=info msg="Start subscribing containerd event" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400853134Z" level=info msg="Start recovering state" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400876567Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400885124Z" level=info msg="Start event monitor" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400891980Z" level=info msg="Start snapshots syncer" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400897852Z" level=info msg="Start cni network conf syncer for default" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400898588Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.400901979Z" level=info msg="Start streaming server" Feb 9 13:15:54.403252 env[1471]: time="2024-02-09T13:15:54.401094268Z" level=info msg="containerd successfully booted in 0.023214s" Feb 9 13:15:54.405657 systemd[1]: Started containerd.service. 
Feb 9 13:15:54.406253 tar[1466]: ./vlan Feb 9 13:15:54.414364 systemd[1]: Started locksmithd.service. Feb 9 13:15:54.420704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 13:15:54.420792 systemd[1]: Reached target system-config.target. Feb 9 13:15:54.427221 tar[1466]: ./portmap Feb 9 13:15:54.428633 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 13:15:54.428701 systemd[1]: Reached target user-config.target. Feb 9 13:15:54.447148 tar[1466]: ./host-local Feb 9 13:15:54.464728 tar[1466]: ./vrf Feb 9 13:15:54.472635 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 13:15:54.483785 tar[1466]: ./bridge Feb 9 13:15:54.506628 tar[1466]: ./tuning Feb 9 13:15:54.524529 tar[1466]: ./firewall Feb 9 13:15:54.548067 tar[1466]: ./host-device Feb 9 13:15:54.568540 tar[1466]: ./sbr Feb 9 13:15:54.587188 tar[1466]: ./loopback Feb 9 13:15:54.604830 tar[1466]: ./dhcp Feb 9 13:15:54.621815 systemd[1]: Finished prepare-critools.service. Feb 9 13:15:54.655024 tar[1466]: ./ptp Feb 9 13:15:54.676372 tar[1466]: ./ipvlan Feb 9 13:15:54.687551 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 9 13:15:54.714143 extend-filesystems[1449]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 13:15:54.714143 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 13:15:54.714143 extend-filesystems[1449]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 9 13:15:54.714656 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 9 13:15:54.753002 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 13:15:54.753175 extend-filesystems[1433]: Resized filesystem in /dev/sda9 Feb 9 13:15:54.753175 extend-filesystems[1433]: Found sdb Feb 9 13:15:54.781665 tar[1466]: ./bandwidth Feb 9 13:15:54.714740 systemd[1]: Finished extend-filesystems.service. Feb 9 13:15:54.738060 systemd[1]: Finished sshd-keygen.service. Feb 9 13:15:54.764061 systemd[1]: Starting issuegen.service... Feb 9 13:15:54.775425 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 13:15:54.775505 systemd[1]: Finished issuegen.service. Feb 9 13:15:54.778929 systemd-networkd[1320]: bond0: Gained IPv6LL Feb 9 13:15:54.789039 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 13:15:54.798460 systemd[1]: Starting systemd-user-sessions.service... Feb 9 13:15:54.807979 systemd[1]: Finished systemd-user-sessions.service. Feb 9 13:15:54.817759 systemd[1]: Started getty@tty1.service. Feb 9 13:15:54.827101 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 13:15:54.837084 systemd[1]: Reached target getty.target. Feb 9 13:15:54.877584 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 13:15:59.862273 login[1529]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 13:15:59.862707 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 13:15:59.870033 systemd[1]: Created slice user-500.slice. Feb 9 13:15:59.870571 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 13:15:59.871497 systemd-logind[1461]: New session 2 of user core. Feb 9 13:15:59.875790 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 13:15:59.876443 systemd[1]: Starting user@500.service... Feb 9 13:15:59.878609 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:15:59.947774 systemd[1533]: Queued start job for default target default.target. 
Feb 9 13:15:59.947999 systemd[1533]: Reached target paths.target. Feb 9 13:15:59.948011 systemd[1533]: Reached target sockets.target. Feb 9 13:15:59.948019 systemd[1533]: Reached target timers.target. Feb 9 13:15:59.948025 systemd[1533]: Reached target basic.target. Feb 9 13:15:59.948044 systemd[1533]: Reached target default.target. Feb 9 13:15:59.948058 systemd[1533]: Startup finished in 66ms. Feb 9 13:15:59.948108 systemd[1]: Started user@500.service. Feb 9 13:15:59.948645 systemd[1]: Started session-2.scope. Feb 9 13:16:00.155945 coreos-metadata[1425]: Feb 09 13:16:00.155 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 13:16:00.156687 coreos-metadata[1428]: Feb 09 13:16:00.155 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 13:16:00.867935 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 13:16:00.870916 systemd-logind[1461]: New session 1 of user core. Feb 9 13:16:00.871410 systemd[1]: Started session-1.scope. Feb 9 13:16:01.056725 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 9 13:16:01.056874 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 9 13:16:01.156347 coreos-metadata[1428]: Feb 09 13:16:01.156 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 13:16:01.156610 coreos-metadata[1425]: Feb 09 13:16:01.156 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 13:16:01.205146 coreos-metadata[1425]: Feb 09 13:16:01.205 INFO Fetch successful Feb 9 13:16:01.224543 systemd[1]: Created slice system-sshd.slice. Feb 9 13:16:01.225272 systemd[1]: Started sshd@0-86.109.11.101:22-147.75.109.163:35792.service. 
Feb 9 13:16:01.232407 unknown[1425]: wrote ssh authorized keys file for user: core Feb 9 13:16:01.243413 update-ssh-keys[1555]: Updated "/home/core/.ssh/authorized_keys" Feb 9 13:16:01.243649 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 13:16:01.265053 sshd[1554]: Accepted publickey for core from 147.75.109.163 port 35792 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 13:16:01.267983 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:16:01.277481 systemd-logind[1461]: New session 3 of user core. Feb 9 13:16:01.279617 systemd[1]: Started session-3.scope. Feb 9 13:16:01.345529 systemd[1]: Started sshd@1-86.109.11.101:22-147.75.109.163:35794.service. Feb 9 13:16:01.370857 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 35794 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 13:16:01.371612 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:16:01.373999 systemd-logind[1461]: New session 4 of user core. Feb 9 13:16:01.374471 systemd[1]: Started session-4.scope. Feb 9 13:16:01.418471 coreos-metadata[1428]: Feb 09 13:16:01.418 INFO Fetch successful Feb 9 13:16:01.427544 sshd[1560]: pam_unix(sshd:session): session closed for user core Feb 9 13:16:01.429072 systemd[1]: sshd@1-86.109.11.101:22-147.75.109.163:35794.service: Deactivated successfully. Feb 9 13:16:01.429407 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 13:16:01.429810 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Feb 9 13:16:01.430268 systemd[1]: Started sshd@2-86.109.11.101:22-147.75.109.163:35806.service. Feb 9 13:16:01.430748 systemd-logind[1461]: Removed session 4. Feb 9 13:16:01.444369 systemd[1]: Finished coreos-metadata.service. Feb 9 13:16:01.445124 systemd[1]: Started packet-phone-home.service. Feb 9 13:16:01.445234 systemd[1]: Reached target multi-user.target. 
Feb 9 13:16:01.445838 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 13:16:01.449842 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 13:16:01.449916 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 13:16:01.450071 systemd[1]: Startup finished in 2.003s (kernel) + 19.504s (initrd) + 14.118s (userspace) = 35.626s. Feb 9 13:16:01.450221 curl[1570]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 13:16:01.450221 curl[1570]: Dload Upload Total Spent Left Speed Feb 9 13:16:01.455516 sshd[1566]: Accepted publickey for core from 147.75.109.163 port 35806 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 13:16:01.456275 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:16:01.458773 systemd-logind[1461]: New session 5 of user core. Feb 9 13:16:01.459139 systemd[1]: Started session-5.scope. Feb 9 13:16:01.512858 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 9 13:16:01.516088 systemd[1]: sshd@2-86.109.11.101:22-147.75.109.163:35806.service: Deactivated successfully. Feb 9 13:16:01.517133 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 13:16:01.518232 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Feb 9 13:16:01.520071 systemd-logind[1461]: Removed session 5. Feb 9 13:16:01.656105 curl[1570]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 13:16:01.658496 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 13:16:01.668364 systemd-timesyncd[1418]: Contacted time server 108.61.73.244:123 (0.flatcar.pool.ntp.org). Feb 9 13:16:01.668585 systemd-timesyncd[1418]: Initial clock synchronization to Fri 2024-02-09 13:16:02.027153 UTC. Feb 9 13:16:11.776864 systemd[1]: Started sshd@3-86.109.11.101:22-147.75.109.163:52606.service. 
Feb 9 13:16:11.803168 sshd[1576]: Accepted publickey for core from 147.75.109.163 port 52606 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 13:16:11.804042 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:16:11.807194 systemd-logind[1461]: New session 6 of user core. Feb 9 13:16:11.807798 systemd[1]: Started session-6.scope. Feb 9 13:16:11.863404 sshd[1576]: pam_unix(sshd:session): session closed for user core Feb 9 13:16:11.865052 systemd[1]: sshd@3-86.109.11.101:22-147.75.109.163:52606.service: Deactivated successfully. Feb 9 13:16:11.865361 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 13:16:11.865722 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Feb 9 13:16:11.866195 systemd[1]: Started sshd@4-86.109.11.101:22-147.75.109.163:52608.service. Feb 9 13:16:11.866523 systemd-logind[1461]: Removed session 6. Feb 9 13:16:11.892911 sshd[1582]: Accepted publickey for core from 147.75.109.163 port 52608 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI Feb 9 13:16:11.895796 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 13:16:11.905340 systemd-logind[1461]: New session 7 of user core. Feb 9 13:16:11.907460 systemd[1]: Started session-7.scope. Feb 9 13:16:11.973514 sshd[1582]: pam_unix(sshd:session): session closed for user core Feb 9 13:16:11.975148 systemd[1]: sshd@4-86.109.11.101:22-147.75.109.163:52608.service: Deactivated successfully. Feb 9 13:16:11.975439 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 13:16:11.975862 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Feb 9 13:16:11.976282 systemd[1]: Started sshd@5-86.109.11.101:22-147.75.109.163:52620.service. Feb 9 13:16:11.976645 systemd-logind[1461]: Removed session 7. 
Feb 9 13:16:12.002718 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 52620 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI
Feb 9 13:16:12.003591 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 13:16:12.006767 systemd-logind[1461]: New session 8 of user core.
Feb 9 13:16:12.007476 systemd[1]: Started session-8.scope.
Feb 9 13:16:12.074030 sshd[1588]: pam_unix(sshd:session): session closed for user core
Feb 9 13:16:12.080841 systemd[1]: sshd@5-86.109.11.101:22-147.75.109.163:52620.service: Deactivated successfully.
Feb 9 13:16:12.082403 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 13:16:12.084134 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit.
Feb 9 13:16:12.086553 systemd[1]: Started sshd@6-86.109.11.101:22-147.75.109.163:52630.service.
Feb 9 13:16:12.089038 systemd-logind[1461]: Removed session 8.
Feb 9 13:16:12.137998 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 52630 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI
Feb 9 13:16:12.138685 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 13:16:12.140749 systemd-logind[1461]: New session 9 of user core.
Feb 9 13:16:12.141197 systemd[1]: Started session-9.scope.
Feb 9 13:16:12.206133 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 9 13:16:12.206630 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 13:16:12.226994 dbus-daemon[1431]: \xd0\xcdz\x99\u001cV: received setenforce notice (enforcing=-1040393856)
Feb 9 13:16:12.231474 sudo[1597]: pam_unix(sudo:session): session closed for user root
Feb 9 13:16:12.235986 sshd[1594]: pam_unix(sshd:session): session closed for user core
Feb 9 13:16:12.242844 systemd[1]: sshd@6-86.109.11.101:22-147.75.109.163:52630.service: Deactivated successfully.
Feb 9 13:16:12.244321 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 13:16:12.245976 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit.
Feb 9 13:16:12.248282 systemd[1]: Started sshd@7-86.109.11.101:22-147.75.109.163:52636.service.
Feb 9 13:16:12.250439 systemd-logind[1461]: Removed session 9.
Feb 9 13:16:12.309204 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 52636 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI
Feb 9 13:16:12.312276 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 13:16:12.321743 systemd-logind[1461]: New session 10 of user core.
Feb 9 13:16:12.323979 systemd[1]: Started session-10.scope.
Feb 9 13:16:12.400588 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 9 13:16:12.401180 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 13:16:12.408364 sudo[1605]: pam_unix(sudo:session): session closed for user root
Feb 9 13:16:12.416007 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 9 13:16:12.416113 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 13:16:12.421124 systemd[1]: Stopping audit-rules.service...
Feb 9 13:16:12.421000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 13:16:12.422027 auditctl[1608]: No rules
Feb 9 13:16:12.422194 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 9 13:16:12.422270 systemd[1]: Stopped audit-rules.service.
Feb 9 13:16:12.422978 systemd[1]: Starting audit-rules.service...
Feb 9 13:16:12.427379 kernel: kauditd_printk_skb: 95 callbacks suppressed
Feb 9 13:16:12.427449 kernel: audit: type=1305 audit(1707484572.421:158): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 9 13:16:12.432467 augenrules[1625]: No rules
Feb 9 13:16:12.432752 systemd[1]: Finished audit-rules.service.
Feb 9 13:16:12.433172 sudo[1604]: pam_unix(sudo:session): session closed for user root
Feb 9 13:16:12.433976 sshd[1601]: pam_unix(sshd:session): session closed for user core
Feb 9 13:16:12.435477 systemd[1]: sshd@7-86.109.11.101:22-147.75.109.163:52636.service: Deactivated successfully.
Feb 9 13:16:12.435805 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 13:16:12.436208 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit.
Feb 9 13:16:12.436753 systemd[1]: Started sshd@8-86.109.11.101:22-147.75.109.163:52650.service.
Feb 9 13:16:12.437209 systemd-logind[1461]: Removed session 10.
Feb 9 13:16:12.421000 audit[1608]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe068af420 a2=420 a3=0 items=0 ppid=1 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:16:12.463161 sshd[1631]: Accepted publickey for core from 147.75.109.163 port 52650 ssh2: RSA SHA256:64VUfRXiMosPxVXfALumiHZVs3BYorCRVSgPBbg6OcI
Feb 9 13:16:12.463890 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 13:16:12.466114 systemd-logind[1461]: New session 11 of user core.
Feb 9 13:16:12.466504 systemd[1]: Started session-11.scope.
Feb 9 13:16:12.474486 kernel: audit: type=1300 audit(1707484572.421:158): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe068af420 a2=420 a3=0 items=0 ppid=1 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:16:12.474538 kernel: audit: type=1327 audit(1707484572.421:158): proctitle=2F7362696E2F617564697463746C002D44
Feb 9 13:16:12.421000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 9 13:16:12.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.506861 kernel: audit: type=1131 audit(1707484572.421:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.506895 kernel: audit: type=1130 audit(1707484572.432:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.514064 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 13:16:12.514173 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 13:16:12.432000 audit[1604]: USER_END pid=1604 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.555949 kernel: audit: type=1106 audit(1707484572.432:161): pid=1604 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.555995 kernel: audit: type=1104 audit(1707484572.432:162): pid=1604 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.432000 audit[1604]: CRED_DISP pid=1604 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.433000 audit[1601]: USER_END pid=1601 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.612548 kernel: audit: type=1106 audit(1707484572.433:163): pid=1601 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.612588 kernel: audit: type=1104 audit(1707484572.434:164): pid=1601 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.434000 audit[1601]: CRED_DISP pid=1601 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-86.109.11.101:22-147.75.109.163:52636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.664638 kernel: audit: type=1131 audit(1707484572.435:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-86.109.11.101:22-147.75.109.163:52636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.11.101:22-147.75.109.163:52650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.461000 audit[1631]: USER_ACCT pid=1631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.462000 audit[1631]: CRED_ACQ pid=1631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.462000 audit[1631]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff736ac5b0 a2=3 a3=0 items=0 ppid=1 pid=1631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:16:12.462000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 13:16:12.467000 audit[1631]: USER_START pid=1631 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.467000 audit[1633]: CRED_ACQ pid=1633 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 9 13:16:12.513000 audit[1634]: USER_ACCT pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.513000 audit[1634]: CRED_REFR pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:12.514000 audit[1634]: USER_START pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 9 13:16:16.539070 systemd[1]: Reloading.
Feb 9 13:16:16.570453 /usr/lib/systemd/system-generators/torcx-generator[1665]: time="2024-02-09T13:16:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 13:16:16.570481 /usr/lib/systemd/system-generators/torcx-generator[1665]: time="2024-02-09T13:16:16Z" level=info msg="torcx already run"
Feb 9 13:16:16.624471 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 13:16:16.624480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 13:16:16.637214 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit: BPF prog-id=31 op=LOAD
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.682000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.682000 audit: BPF prog-id=32 op=LOAD
Feb 9 13:16:16.682000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 13:16:16.682000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit: BPF prog-id=33 op=LOAD
Feb 9 13:16:16.683000 audit: BPF prog-id=29 op=UNLOAD
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.683000 audit: BPF prog-id=34 op=LOAD
Feb 9 13:16:16.683000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit: BPF prog-id=35 op=LOAD
Feb 9 13:16:16.684000 audit: BPF prog-id=26 op=UNLOAD
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit: BPF prog-id=36 op=LOAD
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.684000 audit: BPF prog-id=37 op=LOAD
Feb 9 13:16:16.684000 audit: BPF prog-id=27 op=UNLOAD
Feb 9 13:16:16.684000 audit: BPF prog-id=28 op=UNLOAD
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit: BPF prog-id=38 op=LOAD
Feb 9 13:16:16.685000 audit: BPF prog-id=24 op=UNLOAD
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.685000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit: BPF prog-id=39 op=LOAD
Feb 9 13:16:16.686000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit: BPF prog-id=40 op=LOAD
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit: BPF prog-id=41 op=LOAD
Feb 9 13:16:16.686000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 13:16:16.686000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit: BPF prog-id=42 op=LOAD Feb 9 13:16:16.686000 audit: BPF prog-id=21 op=UNLOAD Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit: BPF prog-id=43 op=LOAD Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.686000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.687000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.687000 audit: BPF prog-id=44 op=LOAD Feb 9 13:16:16.687000 audit: BPF prog-id=22 op=UNLOAD Feb 9 13:16:16.687000 audit: BPF prog-id=23 op=UNLOAD Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:16.688000 audit: BPF prog-id=45 op=LOAD Feb 9 13:16:16.688000 audit: BPF prog-id=25 op=UNLOAD Feb 9 13:16:16.695417 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 13:16:16.699227 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 13:16:16.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:16:16.699593 systemd[1]: Reached target network-online.target. Feb 9 13:16:16.700326 systemd[1]: Started kubelet.service. Feb 9 13:16:16.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:16:16.723607 kubelet[1723]: E0209 13:16:16.723550 1723 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 13:16:16.724792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 13:16:16.724860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 13:16:16.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 13:16:17.141771 systemd[1]: Stopped kubelet.service. Feb 9 13:16:17.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:16:17.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:16:17.172414 systemd[1]: Reloading. Feb 9 13:16:17.204161 /usr/lib/systemd/system-generators/torcx-generator[1826]: time="2024-02-09T13:16:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 13:16:17.204177 /usr/lib/systemd/system-generators/torcx-generator[1826]: time="2024-02-09T13:16:17Z" level=info msg="torcx already run" Feb 9 13:16:17.263116 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 13:16:17.263127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 13:16:17.278005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit: BPF prog-id=46 op=LOAD Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:17.323000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.323000 audit: BPF prog-id=47 op=LOAD Feb 9 13:16:17.323000 audit: BPF prog-id=31 op=UNLOAD Feb 9 13:16:17.323000 audit: BPF prog-id=32 op=UNLOAD Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:16:17.324000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit: BPF prog-id=48 op=LOAD Feb 9 13:16:17.325000 audit: BPF prog-id=33 op=UNLOAD Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit: BPF prog-id=49 op=LOAD Feb 9 13:16:17.325000 audit: BPF prog-id=34 op=UNLOAD Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.325000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit: BPF prog-id=50 op=LOAD Feb 9 13:16:17.326000 audit: BPF prog-id=35 op=UNLOAD Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } 
for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit: BPF prog-id=51 op=LOAD Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.326000 audit: BPF prog-id=52 op=LOAD Feb 9 13:16:17.326000 audit: BPF prog-id=36 op=UNLOAD Feb 9 13:16:17.326000 audit: BPF prog-id=37 op=UNLOAD Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit: BPF prog-id=53 op=LOAD Feb 9 13:16:17.327000 audit: BPF prog-id=38 op=UNLOAD Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit: BPF prog-id=54 op=LOAD Feb 9 13:16:17.327000 audit: BPF prog-id=39 op=UNLOAD Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit: BPF prog-id=55 op=LOAD Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.327000 audit: BPF prog-id=56 op=LOAD Feb 9 13:16:17.327000 audit: BPF prog-id=40 op=UNLOAD Feb 9 13:16:17.327000 audit: BPF prog-id=41 op=UNLOAD Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit: BPF prog-id=57 op=LOAD Feb 9 13:16:17.328000 audit: BPF prog-id=42 op=UNLOAD Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit: BPF prog-id=58 op=LOAD Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.328000 audit: BPF prog-id=59 op=LOAD Feb 9 13:16:17.328000 audit: BPF prog-id=43 op=UNLOAD Feb 9 13:16:17.328000 audit: BPF prog-id=44 op=UNLOAD Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.329000 audit: BPF prog-id=60 op=LOAD Feb 9 13:16:17.329000 audit: BPF prog-id=45 op=UNLOAD Feb 9 13:16:17.338266 systemd[1]: Started kubelet.service. Feb 9 13:16:17.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:16:17.359906 kubelet[1884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 13:16:17.359906 kubelet[1884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:16:17.360112 kubelet[1884]: I0209 13:16:17.359914 1884 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 13:16:17.360693 kubelet[1884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 13:16:17.360693 kubelet[1884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 13:16:17.552724 kubelet[1884]: I0209 13:16:17.552644 1884 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 13:16:17.552724 kubelet[1884]: I0209 13:16:17.552654 1884 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 13:16:17.552788 kubelet[1884]: I0209 13:16:17.552779 1884 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 13:16:17.554186 kubelet[1884]: I0209 13:16:17.554146 1884 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 13:16:17.572045 kubelet[1884]: I0209 13:16:17.572034 1884 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 13:16:17.572159 kubelet[1884]: I0209 13:16:17.572123 1884 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 13:16:17.572200 kubelet[1884]: I0209 13:16:17.572162 1884 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 13:16:17.572200 kubelet[1884]: I0209 13:16:17.572173 1884 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 13:16:17.572200 kubelet[1884]: I0209 13:16:17.572180 1884 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 13:16:17.572315 kubelet[1884]: I0209 13:16:17.572231 1884 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 13:16:17.573519 kubelet[1884]: I0209 13:16:17.573511 1884 kubelet.go:398] "Attempting to sync node with API server" Feb 9 13:16:17.573564 kubelet[1884]: I0209 13:16:17.573522 1884 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 13:16:17.573564 kubelet[1884]: I0209 13:16:17.573535 1884 kubelet.go:297] "Adding apiserver pod source" Feb 9 13:16:17.573564 kubelet[1884]: I0209 13:16:17.573543 1884 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 13:16:17.573651 kubelet[1884]: E0209 13:16:17.573584 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:17.573651 kubelet[1884]: E0209 13:16:17.573588 1884 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:17.573839 kubelet[1884]: I0209 13:16:17.573830 1884 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 13:16:17.573953 kubelet[1884]: W0209 13:16:17.573947 1884 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 13:16:17.574146 kubelet[1884]: I0209 13:16:17.574139 1884 server.go:1186] "Started kubelet" Feb 9 13:16:17.574236 kubelet[1884]: I0209 13:16:17.574226 1884 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 13:16:17.574306 kubelet[1884]: E0209 13:16:17.574297 1884 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 13:16:17.574335 kubelet[1884]: E0209 13:16:17.574311 1884 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 13:16:17.574000 audit[1884]: AVC avc: denied { mac_admin } for pid=1884 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.574630 kubelet[1884]: I0209 13:16:17.574599 1884 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 13:16:17.574630 kubelet[1884]: I0209 13:16:17.574617 1884 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 13:16:17.574692 kubelet[1884]: I0209 13:16:17.574644 1884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 13:16:17.574692 kubelet[1884]: I0209 13:16:17.574688 1884 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 13:16:17.574745 kubelet[1884]: E0209 13:16:17.574698 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:17.574745 kubelet[1884]: I0209 13:16:17.574719 1884 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 13:16:17.574798 kubelet[1884]: I0209 13:16:17.574772 1884 server.go:451] "Adding debug handlers to kubelet server" Feb 9 13:16:17.580240 kernel: kauditd_printk_skb: 361 callbacks suppressed Feb 9 13:16:17.580311 kernel: audit: type=1400 audit(1707484577.574:525): avc: denied { mac_admin } for pid=1884 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.582531 kubelet[1884]: W0209 13:16:17.582520 1884 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:17.582646 kubelet[1884]: E0209 13:16:17.582536 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:17.582779 kubelet[1884]: E0209 13:16:17.582745 1884 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:17.583242 kubelet[1884]: E0209 13:16:17.583097 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d39c38792", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 574127506, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 574127506, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.583242 kubelet[1884]: W0209 13:16:17.583165 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:17.583242 kubelet[1884]: E0209 13:16:17.583216 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:17.583417 kubelet[1884]: W0209 13:16:17.583257 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:17.583417 kubelet[1884]: E0209 13:16:17.583284 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:17.587177 kubelet[1884]: E0209 13:16:17.587125 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d39c63935", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 574304053, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 574304053, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.594872 kubelet[1884]: I0209 13:16:17.594838 1884 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 13:16:17.594872 kubelet[1884]: I0209 13:16:17.594846 1884 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 13:16:17.594872 kubelet[1884]: I0209 13:16:17.594872 1884 state_mem.go:36] "Initialized new in-memory state store" Feb 9 13:16:17.595476 kubelet[1884]: E0209 13:16:17.595444 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", 
UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.595710 kubelet[1884]: I0209 13:16:17.595674 1884 policy_none.go:49] "None policy: Start" Feb 9 13:16:17.595903 kubelet[1884]: I0209 13:16:17.595897 1884 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 13:16:17.595931 kubelet[1884]: I0209 13:16:17.595907 1884 state_mem.go:35] "Initializing new in-memory state store" Feb 9 13:16:17.596257 kubelet[1884]: E0209 13:16:17.596222 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", 
Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.596978 kubelet[1884]: E0209 13:16:17.596949 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 9 13:16:17.598182 systemd[1]: Created slice kubepods.slice. Feb 9 13:16:17.600102 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 13:16:17.574000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 13:16:17.613998 kernel: audit: type=1401 audit(1707484577.574:525): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 13:16:17.614111 kernel: audit: type=1300 audit(1707484577.574:525): arch=c000003e syscall=188 success=no exit=-22 a0=c0010fcc60 a1=c000fb01e0 a2=c0010fcc30 a3=25 items=0 ppid=1 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:17.574000 audit[1884]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010fcc60 a1=c000fb01e0 a2=c0010fcc30 a3=25 items=0 ppid=1 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:17.675589 kubelet[1884]: I0209 13:16:17.675575 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:17.677069 kubelet[1884]: E0209 13:16:17.677034 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", 
UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 675553581, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.677172 kubelet[1884]: E0209 13:16:17.677094 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:17.678105 kubelet[1884]: E0209 13:16:17.678071 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, 
FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 675557897, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.679154 kubelet[1884]: E0209 13:16:17.679118 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 675561486, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 9 13:16:17.574000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 13:16:17.761796 kernel: audit: type=1327 audit(1707484577.574:525): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 13:16:17.761839 kernel: audit: type=1400 audit(1707484577.574:526): avc: denied { mac_admin } for pid=1884 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.574000 audit[1884]: AVC avc: denied { mac_admin } for pid=1884 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:17.784574 kubelet[1884]: E0209 13:16:17.784560 1884 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:17.574000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 13:16:17.849365 kernel: audit: type=1401 audit(1707484577.574:526): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 13:16:17.849401 kernel: audit: type=1300 audit(1707484577.574:526): arch=c000003e syscall=188 success=no exit=-22 a0=c000089f20 a1=c000fb01f8 a2=c0010fccf0 a3=25 items=0 ppid=1 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:17.574000 audit[1884]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000089f20 a1=c000fb01f8 a2=c0010fccf0 a3=25 items=0 ppid=1 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:17.878086 kubelet[1884]: I0209 13:16:17.878075 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:17.879482 kubelet[1884]: E0209 13:16:17.879448 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 878060924, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource 
"events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:17.879679 kubelet[1884]: E0209 13:16:17.879669 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:17.880714 kubelet[1884]: E0209 13:16:17.880683 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 878063041, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:17.882299 kubelet[1884]: E0209 13:16:17.882263 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 878064256, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:17.574000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 13:16:18.028857 kernel: audit: type=1327 audit(1707484577.574:526): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 13:16:18.028000 audit[1910]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.086252 kernel: audit: type=1325 audit(1707484578.028:527): table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.086295 kernel: audit: type=1300 audit(1707484578.028:527): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd37028f50 a2=0 a3=7ffd37028f3c items=0 ppid=1884 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.028000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd37028f50 a2=0 a3=7ffd37028f3c items=0 ppid=1884 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 13:16:18.029000 audit[1912]: NETFILTER_CFG table=filter:3 family=2 entries=2 
op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.029000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc7b19fa40 a2=0 a3=7ffc7b19fa2c items=0 ppid=1884 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 13:16:18.186194 kubelet[1884]: E0209 13:16:18.186156 1884 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:18.186194 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 13:16:18.186763 kubelet[1884]: I0209 13:16:18.186727 1884 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 13:16:18.186763 kubelet[1884]: I0209 13:16:18.186753 1884 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 13:16:18.185000 audit[1884]: AVC avc: denied { mac_admin } for pid=1884 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:18.185000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 13:16:18.185000 audit[1884]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002db200 a1=c0010da630 a2=c0002db170 a3=25 items=0 ppid=1 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.185000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 13:16:18.186913 kubelet[1884]: I0209 13:16:18.186858 1884 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 13:16:18.187106 kubelet[1884]: E0209 13:16:18.187075 1884 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.7\" not found" Feb 9 13:16:18.201337 kubelet[1884]: E0209 13:16:18.201276 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d5f114fa8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 18, 199981992, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 18, 199981992, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:18.030000 audit[1914]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.030000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffec106a260 a2=0 a3=7ffec106a24c items=0 ppid=1884 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 13:16:18.226000 audit[1919]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.226000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd45a30820 a2=0 a3=7ffd45a3080c items=0 ppid=1884 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 13:16:18.281115 kubelet[1884]: I0209 13:16:18.281096 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:18.281997 kubelet[1884]: E0209 13:16:18.281957 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:18.282194 kubelet[1884]: E0209 13:16:18.282119 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 18, 281053225, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 
'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:18.293000 audit[1924]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.293000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc363d50d0 a2=0 a3=7ffc363d50bc items=0 ppid=1884 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 13:16:18.294000 audit[1925]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.294000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe6990e290 a2=0 a3=7ffe6990e27c items=0 ppid=1884 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 13:16:18.297000 audit[1928]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.297000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcaa8fe7a0 a2=0 a3=7ffcaa8fe78c items=0 ppid=1884 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 13:16:18.299000 audit[1931]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.299000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe9c185880 a2=0 a3=7ffe9c18586c items=0 ppid=1884 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.299000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 13:16:18.300000 audit[1932]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.300000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdefacb5d0 a2=0 a3=7ffdefacb5bc items=0 ppid=1884 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 13:16:18.300000 audit[1933]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 9 13:16:18.300000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffee1b3d60 a2=0 a3=7fffee1b3d4c items=0 ppid=1884 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 13:16:18.302000 audit[1935]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.302000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffcb585f10 a2=0 a3=7fffcb585efc items=0 ppid=1884 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 13:16:18.303000 audit[1937]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.303000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe8da008e0 a2=0 a3=7ffe8da008cc items=0 ppid=1884 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.303000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 13:16:18.322000 audit[1940]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.322000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc2ed09c80 a2=0 a3=7ffc2ed09c6c items=0 ppid=1884 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 13:16:18.323000 audit[1942]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.323000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd785fd840 a2=0 a3=7ffd785fd82c items=0 ppid=1884 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 13:16:18.327000 audit[1945]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.327000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe2a711ee0 
a2=0 a3=7ffe2a711ecc items=0 ppid=1884 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 13:16:18.329652 kubelet[1884]: I0209 13:16:18.329646 1884 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 13:16:18.328000 audit[1946]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.328000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc22748460 a2=0 a3=7ffc2274844c items=0 ppid=1884 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 13:16:18.328000 audit[1947]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.328000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc44179350 a2=0 a3=7ffc4417933c items=0 ppid=1884 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.328000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 13:16:18.328000 audit[1948]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.328000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc9d3011d0 a2=0 a3=7ffc9d3011bc items=0 ppid=1884 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 13:16:18.329000 audit[1949]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.329000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd46d116a0 a2=0 a3=7ffd46d1168c items=0 ppid=1884 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 13:16:18.329000 audit[1951]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:18.329000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1ace54d0 a2=0 a3=7ffc1ace54bc items=0 ppid=1884 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.329000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 13:16:18.330000 audit[1952]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.330000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc8f788cb0 a2=0 a3=7ffc8f788c9c items=0 ppid=1884 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 13:16:18.330000 audit[1953]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.330000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff49522f30 a2=0 a3=7fff49522f1c items=0 ppid=1884 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 13:16:18.331000 audit[1955]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.331000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffcdf09c560 a2=0 a3=7ffcdf09c54c items=0 ppid=1884 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 13:16:18.332000 audit[1956]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.332000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6d96e0e0 a2=0 a3=7ffd6d96e0cc items=0 ppid=1884 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 13:16:18.332000 audit[1957]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.332000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdbe46f60 a2=0 a3=7ffcdbe46f4c items=0 ppid=1884 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 13:16:18.333000 audit[1959]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.333000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffcb75ee00 a2=0 
a3=7fffcb75edec items=0 ppid=1884 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 13:16:18.334000 audit[1961]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.334000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd2fa9c310 a2=0 a3=7ffd2fa9c2fc items=0 ppid=1884 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 13:16:18.336000 audit[1963]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.336000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffca2281b50 a2=0 a3=7ffca2281b3c items=0 ppid=1884 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.336000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 13:16:18.337000 audit[1965]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.337000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffef41a2830 a2=0 a3=7ffef41a281c items=0 ppid=1884 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 13:16:18.339000 audit[1967]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.339000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe0a9bda00 a2=0 a3=7ffe0a9bd9ec items=0 ppid=1884 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 13:16:18.341032 kubelet[1884]: I0209 13:16:18.341001 1884 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 13:16:18.341032 kubelet[1884]: I0209 13:16:18.341012 1884 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 13:16:18.341032 kubelet[1884]: I0209 13:16:18.341025 1884 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 13:16:18.341094 kubelet[1884]: E0209 13:16:18.341059 1884 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 13:16:18.339000 audit[1968]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.339000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe17036400 a2=0 a3=7ffe170363ec items=0 ppid=1884 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 13:16:18.340000 audit[1969]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.340000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffda4af670 a2=0 a3=7fffda4af65c items=0 ppid=1884 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 13:16:18.342644 kubelet[1884]: W0209 13:16:18.342611 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: 
User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:18.342644 kubelet[1884]: E0209 13:16:18.342626 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:18.341000 audit[1970]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:18.341000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf8a90480 a2=0 a3=7ffcf8a9046c items=0 ppid=1884 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:18.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 13:16:18.377593 kubelet[1884]: E0209 13:16:18.377437 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: 
NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 18, 281059548, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:18.574642 kubelet[1884]: E0209 13:16:18.574516 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:18.577219 kubelet[1884]: E0209 13:16:18.576957 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 18, 281063868, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:18.703158 kubelet[1884]: W0209 13:16:18.702950 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:18.703158 kubelet[1884]: E0209 13:16:18.703030 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:18.757834 kubelet[1884]: W0209 13:16:18.757740 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:18.757834 kubelet[1884]: E0209 13:16:18.757799 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:18.989270 kubelet[1884]: E0209 13:16:18.989059 1884 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:19.072865 kubelet[1884]: W0209 13:16:19.072769 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: 
failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:19.072865 kubelet[1884]: E0209 13:16:19.072841 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:19.083302 kubelet[1884]: I0209 13:16:19.083218 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:19.085175 kubelet[1884]: E0209 13:16:19.085097 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:19.085459 kubelet[1884]: E0209 13:16:19.085263 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 19, 83140855, time.Local), Count:5, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 13:16:19.087314 kubelet[1884]: E0209 13:16:19.087129 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 19, 83158570, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:19.176405 kubelet[1884]: E0209 13:16:19.176148 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 19, 83164701, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:19.575270 kubelet[1884]: E0209 13:16:19.575153 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:19.767742 kubelet[1884]: W0209 13:16:19.767624 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:19.767742 kubelet[1884]: E0209 13:16:19.767697 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:20.575442 kubelet[1884]: E0209 13:16:20.575340 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:20.591537 kubelet[1884]: E0209 13:16:20.591421 1884 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:20.686976 kubelet[1884]: I0209 13:16:20.686881 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:20.688819 kubelet[1884]: E0209 13:16:20.688706 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:20.688819 kubelet[1884]: E0209 13:16:20.688676 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 20, 686782925, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:20.690257 kubelet[1884]: E0209 13:16:20.690071 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 20, 686806647, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:20.691643 kubelet[1884]: E0209 13:16:20.691426 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 20, 686818184, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:20.809818 kubelet[1884]: W0209 13:16:20.809708 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:20.809818 kubelet[1884]: E0209 13:16:20.809777 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:20.956228 kubelet[1884]: W0209 13:16:20.955999 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:20.956228 kubelet[1884]: E0209 13:16:20.956067 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:21.576068 kubelet[1884]: E0209 13:16:21.575948 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:21.828391 kubelet[1884]: W0209 13:16:21.828190 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:21.828391 kubelet[1884]: E0209 13:16:21.828253 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 
Feb 9 13:16:22.275145 kubelet[1884]: W0209 13:16:22.274935 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:22.275145 kubelet[1884]: E0209 13:16:22.275002 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:22.577106 kubelet[1884]: E0209 13:16:22.577006 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:23.577880 kubelet[1884]: E0209 13:16:23.577777 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:23.794279 kubelet[1884]: E0209 13:16:23.794184 1884 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.67.80.7" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 13:16:23.890892 kubelet[1884]: I0209 13:16:23.890689 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:23.893129 kubelet[1884]: E0209 13:16:23.893046 1884 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.7" Feb 9 13:16:23.893129 kubelet[1884]: E0209 13:16:23.892994 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa5bfd", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.7 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594498045, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 23, 890544636, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa5bfd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:23.895430 kubelet[1884]: E0209 13:16:23.895199 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa695c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.7 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594501468, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 23, 890624992, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa695c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:23.897564 kubelet[1884]: E0209 13:16:23.897338 1884 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7.17b2342d3afa6e59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.7", UID:"10.67.80.7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.7 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.7"}, FirstTimestamp:time.Date(2024, time.February, 9, 13, 16, 17, 594502745, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 13, 16, 23, 890632588, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.7.17b2342d3afa6e59" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 13:16:24.210872 kubelet[1884]: W0209 13:16:24.210649 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:24.210872 kubelet[1884]: E0209 13:16:24.210727 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 13:16:24.578093 kubelet[1884]: E0209 13:16:24.577996 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:25.168477 kubelet[1884]: W0209 13:16:25.168363 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:25.168477 kubelet[1884]: E0209 13:16:25.168437 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 13:16:25.579136 kubelet[1884]: E0209 13:16:25.579029 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:25.980159 kubelet[1884]: W0209 13:16:25.979950 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:25.980159 kubelet[1884]: E0209 13:16:25.980026 1884 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 13:16:26.563412 kubelet[1884]: W0209 13:16:26.563302 1884 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:26.563412 kubelet[1884]: E0209 13:16:26.563369 1884 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.7" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 13:16:26.580313 kubelet[1884]: E0209 13:16:26.580200 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:27.555132 kubelet[1884]: I0209 13:16:27.555019 1884 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 13:16:27.581585 kubelet[1884]: E0209 13:16:27.581453 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:27.963729 kubelet[1884]: E0209 13:16:27.963621 1884 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.7" not found Feb 9 13:16:28.188042 kubelet[1884]: E0209 13:16:28.187946 1884 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.7\" not found" Feb 9 13:16:28.582779 kubelet[1884]: E0209 13:16:28.582659 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
9 13:16:28.989715 kubelet[1884]: E0209 13:16:28.989514 1884 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.7" not found Feb 9 13:16:29.583306 kubelet[1884]: E0209 13:16:29.583186 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:30.205602 kubelet[1884]: E0209 13:16:30.205507 1884 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.7\" not found" node="10.67.80.7" Feb 9 13:16:30.295735 kubelet[1884]: I0209 13:16:30.295643 1884 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.7" Feb 9 13:16:30.391579 kubelet[1884]: I0209 13:16:30.391459 1884 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.7" Feb 9 13:16:30.402009 kubelet[1884]: E0209 13:16:30.401920 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.502456 kubelet[1884]: E0209 13:16:30.502233 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.583707 kubelet[1884]: E0209 13:16:30.583604 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:30.603199 kubelet[1884]: E0209 13:16:30.603094 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.704218 kubelet[1884]: E0209 13:16:30.704100 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.805277 kubelet[1884]: E0209 13:16:30.805095 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.863400 sudo[1634]: pam_unix(sudo:session): session 
closed for user root Feb 9 13:16:30.862000 audit[1634]: USER_END pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 13:16:30.866301 sshd[1631]: pam_unix(sshd:session): session closed for user core Feb 9 13:16:30.870026 systemd[1]: sshd@8-86.109.11.101:22-147.75.109.163:52650.service: Deactivated successfully. Feb 9 13:16:30.870634 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 13:16:30.871148 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Feb 9 13:16:30.871731 systemd-logind[1461]: Removed session 11. Feb 9 13:16:30.890299 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 9 13:16:30.890331 kernel: audit: type=1106 audit(1707484590.862:561): pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 13:16:30.905497 kubelet[1884]: E0209 13:16:30.905457 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:30.863000 audit[1634]: CRED_DISP pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 13:16:31.006393 kubelet[1884]: E0209 13:16:31.006358 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.066879 kernel: audit: type=1104 audit(1707484590.863:562): pid=1634 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 13:16:31.066909 kernel: audit: type=1106 audit(1707484590.867:563): pid=1631 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 13:16:30.867000 audit[1631]: USER_END pid=1631 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 13:16:31.107269 kubelet[1884]: E0209 13:16:31.107221 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.161530 kernel: audit: type=1104 audit(1707484590.867:564): pid=1631 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 13:16:30.867000 audit[1631]: CRED_DISP pid=1631 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 13:16:31.208180 kubelet[1884]: E0209 13:16:31.208144 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.249957 kernel: audit: type=1131 audit(1707484590.868:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.11.101:22-147.75.109.163:52650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 13:16:30.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.11.101:22-147.75.109.163:52650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:16:31.308356 kubelet[1884]: E0209 13:16:31.308313 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.409345 kubelet[1884]: E0209 13:16:31.409239 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.510058 kubelet[1884]: E0209 13:16:31.509957 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.584227 kubelet[1884]: E0209 13:16:31.584102 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:31.610955 kubelet[1884]: E0209 13:16:31.610836 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.711585 kubelet[1884]: E0209 13:16:31.711330 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.812277 kubelet[1884]: E0209 13:16:31.812168 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:31.912494 kubelet[1884]: E0209 13:16:31.912394 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.013427 kubelet[1884]: E0209 13:16:32.013194 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.114189 kubelet[1884]: E0209 13:16:32.114071 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.67.80.7\" not found" Feb 9 13:16:32.214566 kubelet[1884]: E0209 13:16:32.214425 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.315289 kubelet[1884]: E0209 13:16:32.315073 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.415852 kubelet[1884]: E0209 13:16:32.415748 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.516424 kubelet[1884]: E0209 13:16:32.516303 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.585253 kubelet[1884]: E0209 13:16:32.585131 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:32.616710 kubelet[1884]: E0209 13:16:32.616622 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.717656 kubelet[1884]: E0209 13:16:32.717525 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.818412 kubelet[1884]: E0209 13:16:32.818297 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:32.919294 kubelet[1884]: E0209 13:16:32.919075 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.020001 kubelet[1884]: E0209 13:16:33.019883 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.121087 kubelet[1884]: E0209 13:16:33.120980 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.221598 kubelet[1884]: E0209 13:16:33.221348 1884 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.321840 kubelet[1884]: E0209 13:16:33.321736 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.422035 kubelet[1884]: E0209 13:16:33.421928 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.522420 kubelet[1884]: E0209 13:16:33.522211 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.586479 kubelet[1884]: E0209 13:16:33.586378 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:33.623055 kubelet[1884]: E0209 13:16:33.622939 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.724313 kubelet[1884]: E0209 13:16:33.724192 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.824946 kubelet[1884]: E0209 13:16:33.824855 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:33.925701 kubelet[1884]: E0209 13:16:33.925576 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.026086 kubelet[1884]: E0209 13:16:34.025977 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.127347 kubelet[1884]: E0209 13:16:34.127117 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.227401 kubelet[1884]: E0209 13:16:34.227283 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.67.80.7\" not found" Feb 9 13:16:34.327776 kubelet[1884]: E0209 13:16:34.327664 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.428455 kubelet[1884]: E0209 13:16:34.428230 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.528795 kubelet[1884]: E0209 13:16:34.528693 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.586920 kubelet[1884]: E0209 13:16:34.586820 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:34.629622 kubelet[1884]: E0209 13:16:34.629505 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.730704 kubelet[1884]: E0209 13:16:34.730428 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.831542 kubelet[1884]: E0209 13:16:34.831423 1884 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.7\" not found" Feb 9 13:16:34.933726 kubelet[1884]: I0209 13:16:34.933615 1884 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 13:16:34.934437 env[1471]: time="2024-02-09T13:16:34.934316123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 13:16:34.935247 kubelet[1884]: I0209 13:16:34.934769 1884 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 13:16:35.583055 kubelet[1884]: I0209 13:16:35.582941 1884 apiserver.go:52] "Watching apiserver" Feb 9 13:16:35.587027 kubelet[1884]: E0209 13:16:35.586935 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:35.588452 kubelet[1884]: I0209 13:16:35.588358 1884 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:16:35.588682 kubelet[1884]: I0209 13:16:35.588525 1884 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:16:35.588819 kubelet[1884]: I0209 13:16:35.588724 1884 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:16:35.589043 kubelet[1884]: E0209 13:16:35.588947 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:35.601798 systemd[1]: Created slice kubepods-besteffort-podb6771ff4_3e00_498c_90a8_b075a9b2e54f.slice. Feb 9 13:16:35.620474 systemd[1]: Created slice kubepods-besteffort-pod711b2f4e_ea1d_4869_a390_700ff55ad1c1.slice. 
Feb 9 13:16:35.677300 kubelet[1884]: I0209 13:16:35.677202 1884 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 13:16:35.774201 kubelet[1884]: I0209 13:16:35.774094 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4f51fc53-a7af-4e05-9116-86df85873e6c-registration-dir\") pod \"csi-node-driver-72bhh\" (UID: \"4f51fc53-a7af-4e05-9116-86df85873e6c\") " pod="calico-system/csi-node-driver-72bhh" Feb 9 13:16:35.774201 kubelet[1884]: I0209 13:16:35.774201 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-xtables-lock\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.774636 kubelet[1884]: I0209 13:16:35.774372 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/711b2f4e-ea1d-4869-a390-700ff55ad1c1-node-certs\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.774636 kubelet[1884]: I0209 13:16:35.774576 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-var-run-calico\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.774879 kubelet[1884]: I0209 13:16:35.774754 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-cni-net-dir\") pod \"calico-node-z64hk\" (UID: 
\"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.774989 kubelet[1884]: I0209 13:16:35.774885 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-flexvol-driver-host\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.774989 kubelet[1884]: I0209 13:16:35.774961 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4f51fc53-a7af-4e05-9116-86df85873e6c-varrun\") pod \"csi-node-driver-72bhh\" (UID: \"4f51fc53-a7af-4e05-9116-86df85873e6c\") " pod="calico-system/csi-node-driver-72bhh" Feb 9 13:16:35.775253 kubelet[1884]: I0209 13:16:35.775196 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4f51fc53-a7af-4e05-9116-86df85873e6c-socket-dir\") pod \"csi-node-driver-72bhh\" (UID: \"4f51fc53-a7af-4e05-9116-86df85873e6c\") " pod="calico-system/csi-node-driver-72bhh" Feb 9 13:16:35.775404 kubelet[1884]: I0209 13:16:35.775327 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52nb\" (UniqueName: \"kubernetes.io/projected/b6771ff4-3e00-498c-90a8-b075a9b2e54f-kube-api-access-v52nb\") pod \"kube-proxy-fplv9\" (UID: \"b6771ff4-3e00-498c-90a8-b075a9b2e54f\") " pod="kube-system/kube-proxy-fplv9" Feb 9 13:16:35.775512 kubelet[1884]: I0209 13:16:35.775434 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7swj\" (UniqueName: \"kubernetes.io/projected/711b2f4e-ea1d-4869-a390-700ff55ad1c1-kube-api-access-g7swj\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " 
pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.775630 kubelet[1884]: I0209 13:16:35.775563 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6771ff4-3e00-498c-90a8-b075a9b2e54f-xtables-lock\") pod \"kube-proxy-fplv9\" (UID: \"b6771ff4-3e00-498c-90a8-b075a9b2e54f\") " pod="kube-system/kube-proxy-fplv9" Feb 9 13:16:35.775752 kubelet[1884]: I0209 13:16:35.775681 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f51fc53-a7af-4e05-9116-86df85873e6c-kubelet-dir\") pod \"csi-node-driver-72bhh\" (UID: \"4f51fc53-a7af-4e05-9116-86df85873e6c\") " pod="calico-system/csi-node-driver-72bhh" Feb 9 13:16:35.775868 kubelet[1884]: I0209 13:16:35.775777 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6771ff4-3e00-498c-90a8-b075a9b2e54f-kube-proxy\") pod \"kube-proxy-fplv9\" (UID: \"b6771ff4-3e00-498c-90a8-b075a9b2e54f\") " pod="kube-system/kube-proxy-fplv9" Feb 9 13:16:35.775979 kubelet[1884]: I0209 13:16:35.775887 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6771ff4-3e00-498c-90a8-b075a9b2e54f-lib-modules\") pod \"kube-proxy-fplv9\" (UID: \"b6771ff4-3e00-498c-90a8-b075a9b2e54f\") " pod="kube-system/kube-proxy-fplv9" Feb 9 13:16:35.776085 kubelet[1884]: I0209 13:16:35.775988 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhn6c\" (UniqueName: \"kubernetes.io/projected/4f51fc53-a7af-4e05-9116-86df85873e6c-kube-api-access-mhn6c\") pod \"csi-node-driver-72bhh\" (UID: \"4f51fc53-a7af-4e05-9116-86df85873e6c\") " pod="calico-system/csi-node-driver-72bhh" Feb 9 13:16:35.776186 kubelet[1884]: 
I0209 13:16:35.776102 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-policysync\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776295 kubelet[1884]: I0209 13:16:35.776193 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-var-lib-calico\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776399 kubelet[1884]: I0209 13:16:35.776310 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-cni-bin-dir\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776631 kubelet[1884]: I0209 13:16:35.776423 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-cni-log-dir\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776631 kubelet[1884]: I0209 13:16:35.776520 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/711b2f4e-ea1d-4869-a390-700ff55ad1c1-lib-modules\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776631 kubelet[1884]: I0209 13:16:35.776614 1884 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711b2f4e-ea1d-4869-a390-700ff55ad1c1-tigera-ca-bundle\") pod \"calico-node-z64hk\" (UID: \"711b2f4e-ea1d-4869-a390-700ff55ad1c1\") " pod="calico-system/calico-node-z64hk" Feb 9 13:16:35.776937 kubelet[1884]: I0209 13:16:35.776666 1884 reconciler.go:41] "Reconciler: start to sync state" Feb 9 13:16:35.880386 kubelet[1884]: E0209 13:16:35.880233 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.880386 kubelet[1884]: W0209 13:16:35.880275 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.880386 kubelet[1884]: E0209 13:16:35.880348 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:35.881100 kubelet[1884]: E0209 13:16:35.881022 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.881100 kubelet[1884]: W0209 13:16:35.881061 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.881100 kubelet[1884]: E0209 13:16:35.881103 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:35.885532 kubelet[1884]: E0209 13:16:35.885525 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.885532 kubelet[1884]: W0209 13:16:35.885530 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.885642 kubelet[1884]: E0209 13:16:35.885537 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:35.980278 kubelet[1884]: E0209 13:16:35.980162 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.980278 kubelet[1884]: W0209 13:16:35.980203 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.980278 kubelet[1884]: E0209 13:16:35.980249 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:35.980943 kubelet[1884]: E0209 13:16:35.980852 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.980943 kubelet[1884]: W0209 13:16:35.980887 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.980943 kubelet[1884]: E0209 13:16:35.980928 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:35.981523 kubelet[1884]: E0209 13:16:35.981488 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:35.981523 kubelet[1884]: W0209 13:16:35.981522 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:35.981778 kubelet[1884]: E0209 13:16:35.981579 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.082469 kubelet[1884]: E0209 13:16:36.082412 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.082469 kubelet[1884]: W0209 13:16:36.082455 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.082935 kubelet[1884]: E0209 13:16:36.082504 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.083158 kubelet[1884]: E0209 13:16:36.083123 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.083271 kubelet[1884]: W0209 13:16:36.083158 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.083271 kubelet[1884]: E0209 13:16:36.083198 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.083800 kubelet[1884]: E0209 13:16:36.083766 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.083913 kubelet[1884]: W0209 13:16:36.083800 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.083913 kubelet[1884]: E0209 13:16:36.083840 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.185054 kubelet[1884]: E0209 13:16:36.184853 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.185054 kubelet[1884]: W0209 13:16:36.184895 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.185054 kubelet[1884]: E0209 13:16:36.184941 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.185584 kubelet[1884]: E0209 13:16:36.185526 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.185728 kubelet[1884]: W0209 13:16:36.185587 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.185728 kubelet[1884]: E0209 13:16:36.185631 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.186259 kubelet[1884]: E0209 13:16:36.186187 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.186259 kubelet[1884]: W0209 13:16:36.186225 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.186568 kubelet[1884]: E0209 13:16:36.186275 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.196651 kubelet[1884]: E0209 13:16:36.196586 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.196651 kubelet[1884]: W0209 13:16:36.196635 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.196976 kubelet[1884]: E0209 13:16:36.196684 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.288335 kubelet[1884]: E0209 13:16:36.288240 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.288335 kubelet[1884]: W0209 13:16:36.288284 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.288335 kubelet[1884]: E0209 13:16:36.288329 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.289031 kubelet[1884]: E0209 13:16:36.288953 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.289031 kubelet[1884]: W0209 13:16:36.288992 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.289031 kubelet[1884]: E0209 13:16:36.289037 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.390306 kubelet[1884]: E0209 13:16:36.390207 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.390306 kubelet[1884]: W0209 13:16:36.390249 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.390306 kubelet[1884]: E0209 13:16:36.390300 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.390965 kubelet[1884]: E0209 13:16:36.390886 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.390965 kubelet[1884]: W0209 13:16:36.390919 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.390965 kubelet[1884]: E0209 13:16:36.390959 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.395488 kubelet[1884]: E0209 13:16:36.395452 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.395488 kubelet[1884]: W0209 13:16:36.395460 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.395488 kubelet[1884]: E0209 13:16:36.395471 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.492266 kubelet[1884]: E0209 13:16:36.492052 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.492266 kubelet[1884]: W0209 13:16:36.492094 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.492266 kubelet[1884]: E0209 13:16:36.492140 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.525175 env[1471]: time="2024-02-09T13:16:36.524651995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z64hk,Uid:711b2f4e-ea1d-4869-a390-700ff55ad1c1,Namespace:calico-system,Attempt:0,}" Feb 9 13:16:36.587794 kubelet[1884]: E0209 13:16:36.587671 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:36.593481 kubelet[1884]: E0209 13:16:36.593406 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.593481 kubelet[1884]: W0209 13:16:36.593442 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.593481 kubelet[1884]: E0209 13:16:36.593489 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 13:16:36.596441 kubelet[1884]: E0209 13:16:36.596359 1884 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 13:16:36.596441 kubelet[1884]: W0209 13:16:36.596394 1884 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 13:16:36.596441 kubelet[1884]: E0209 13:16:36.596434 1884 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 13:16:36.821652 env[1471]: time="2024-02-09T13:16:36.821501457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fplv9,Uid:b6771ff4-3e00-498c-90a8-b075a9b2e54f,Namespace:kube-system,Attempt:0,}" Feb 9 13:16:37.318418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492388892.mount: Deactivated successfully. 
Feb 9 13:16:37.320093 env[1471]: time="2024-02-09T13:16:37.320045214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.321089 env[1471]: time="2024-02-09T13:16:37.321049425Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.321787 env[1471]: time="2024-02-09T13:16:37.321754542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.322183 env[1471]: time="2024-02-09T13:16:37.322131854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.323034 env[1471]: time="2024-02-09T13:16:37.322994055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.323685 env[1471]: time="2024-02-09T13:16:37.323476067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.325322 env[1471]: time="2024-02-09T13:16:37.325286475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.326537 env[1471]: time="2024-02-09T13:16:37.326499316Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:37.332416 env[1471]: time="2024-02-09T13:16:37.332389239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:16:37.332416 env[1471]: time="2024-02-09T13:16:37.332409108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:16:37.332490 env[1471]: time="2024-02-09T13:16:37.332416660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:16:37.332490 env[1471]: time="2024-02-09T13:16:37.332476111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684 pid=2015 runtime=io.containerd.runc.v2 Feb 9 13:16:37.332851 env[1471]: time="2024-02-09T13:16:37.332804497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:16:37.332851 env[1471]: time="2024-02-09T13:16:37.332818805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:16:37.332851 env[1471]: time="2024-02-09T13:16:37.332824997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:16:37.332915 env[1471]: time="2024-02-09T13:16:37.332877589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6443754b2a10868597bce1de9e92586c81c7c2de13db0072c4dbfe1ee737995b pid=2019 runtime=io.containerd.runc.v2 Feb 9 13:16:37.338217 systemd[1]: Started cri-containerd-6443754b2a10868597bce1de9e92586c81c7c2de13db0072c4dbfe1ee737995b.scope. Feb 9 13:16:37.341796 kubelet[1884]: E0209 13:16:37.341782 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.469871 kernel: audit: type=1400 audit(1707484597.341:566): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.469909 kernel: audit: type=1400 audit(1707484597.341:567): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.469932 kernel: audit: type=1400 audit(1707484597.341:568): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:37.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.471080 systemd[1]: Started cri-containerd-c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684.scope. Feb 9 13:16:37.532436 kernel: audit: type=1400 audit(1707484597.341:569): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.573608 kubelet[1884]: E0209 13:16:37.573566 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:37.588715 kubelet[1884]: E0209 13:16:37.588671 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:37.595563 kernel: audit: type=1400 audit(1707484597.341:570): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.659093 kernel: audit: type=1400 audit(1707484597.341:571): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.722844 kernel: audit: type=1400 audit(1707484597.341:572): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.786736 kernel: audit: type=1400 audit(1707484597.341:573): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.850789 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Feb 9 13:16:37.850820 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Feb 9 13:16:37.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit: BPF prog-id=61 op=LOAD Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2019 pid=2038 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634343337353462326131303836383539376263653164653965393235 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2019 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634343337353462326131303836383539376263653164653965393235 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC 
avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.474000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit: BPF prog-id=62 op=LOAD Feb 9 13:16:37.468000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.468000 audit: BPF prog-id=63 op=LOAD Feb 9 13:16:37.468000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00029acd0 items=0 ppid=2019 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
13:16:37.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634343337353462326131303836383539376263653164653965393235 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:37.658000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2015 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653534383930333064336339396437656261656335633266376131 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2015 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653534383930333064336339396437656261656335633266376131 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { bpf } for pid=2036 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit: BPF prog-id=64 op=LOAD Feb 9 13:16:37.658000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00029ad18 
items=0 ppid=2019 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634343337353462326131303836383539376263653164653965393235 Feb 9 13:16:37.877000 audit: BPF prog-id=64 op=UNLOAD Feb 9 13:16:37.877000 audit: BPF prog-id=63 op=UNLOAD Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { perfmon } for pid=2038 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.658000 audit: BPF prog-id=65 op=LOAD Feb 9 13:16:37.658000 audit[2036]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0002125a0 items=0 ppid=2015 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653534383930333064336339396437656261656335633266376131 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit[2038]: AVC avc: denied { bpf } for pid=2038 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.877000 audit: BPF prog-id=66 op=LOAD Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit: BPF prog-id=67 op=LOAD Feb 9 13:16:37.905000 audit[2036]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0002125e8 items=0 ppid=2015 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.877000 audit[2038]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00029b128 items=0 ppid=2019 pid=2038 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653534383930333064336339396437656261656335633266376131 Feb 9 13:16:37.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634343337353462326131303836383539376263653164653965393235 Feb 9 13:16:37.905000 audit: BPF prog-id=67 op=UNLOAD Feb 9 13:16:37.905000 audit: BPF prog-id=65 op=UNLOAD Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied 
{ perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { perfmon } for pid=2036 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit[2036]: AVC avc: denied { bpf } for pid=2036 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:37.905000 audit: BPF prog-id=68 op=LOAD Feb 9 13:16:37.905000 audit[2036]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002129f8 items=0 ppid=2015 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:37.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333653534383930333064336339396437656261656335633266376131 Feb 9 13:16:37.910936 env[1471]: time="2024-02-09T13:16:37.910908825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z64hk,Uid:711b2f4e-ea1d-4869-a390-700ff55ad1c1,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\"" Feb 9 13:16:37.911162 env[1471]: time="2024-02-09T13:16:37.910925587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fplv9,Uid:b6771ff4-3e00-498c-90a8-b075a9b2e54f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6443754b2a10868597bce1de9e92586c81c7c2de13db0072c4dbfe1ee737995b\"" Feb 9 13:16:37.911782 env[1471]: time="2024-02-09T13:16:37.911743978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 13:16:38.589865 kubelet[1884]: E0209 13:16:38.589742 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:39.342662 kubelet[1884]: E0209 13:16:39.342543 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:39.590644 kubelet[1884]: E0209 13:16:39.590519 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:39.825973 update_engine[1463]: I0209 13:16:39.825846 1463 update_attempter.cc:509] Updating boot flags... 
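The `PROCTITLE` audit records above carry the runc command line as a hex string with NUL-separated argv elements. As a minimal sketch (the helper name `decode_proctitle` is made up for illustration), the encoding can be reversed like this:

```python
# Decode an audit PROCTITLE hex string (NUL-separated argv) into a list of strings.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    # argv elements are separated by NUL bytes in the audit record
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

# Prefix of the proctitle value that appears repeatedly in the records above:
print(decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
))
# → ['runc', '--root', '/run/containerd/runc/k8s.io']
```

The remainder of each proctitle is the `--log` flag and the per-container task path, which is why the long hex strings differ only in their trailing bytes.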
Feb 9 13:16:40.591484 kubelet[1884]: E0209 13:16:40.591351 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:41.341668 kubelet[1884]: E0209 13:16:41.341567 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:41.400980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329320360.mount: Deactivated successfully. Feb 9 13:16:41.591891 kubelet[1884]: E0209 13:16:41.591658 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:42.592004 kubelet[1884]: E0209 13:16:42.591901 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:43.342210 kubelet[1884]: E0209 13:16:43.342108 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:43.593011 kubelet[1884]: E0209 13:16:43.592782 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:44.593867 kubelet[1884]: E0209 13:16:44.593755 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:45.342465 kubelet[1884]: E0209 13:16:45.342355 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:45.594665 kubelet[1884]: E0209 13:16:45.594418 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:46.595777 kubelet[1884]: E0209 13:16:46.595698 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:47.197578 env[1471]: time="2024-02-09T13:16:47.197553268Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:47.198227 env[1471]: time="2024-02-09T13:16:47.198204842Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:47.199572 env[1471]: time="2024-02-09T13:16:47.199530101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:47.200435 env[1471]: time="2024-02-09T13:16:47.200417305Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:47.201391 env[1471]: time="2024-02-09T13:16:47.201376368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 13:16:47.201888 env[1471]: time="2024-02-09T13:16:47.201845829Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 13:16:47.202403 env[1471]: time="2024-02-09T13:16:47.202388713Z" level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 13:16:47.207653 env[1471]: time="2024-02-09T13:16:47.207558519Z" level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34\"" Feb 9 13:16:47.207875 env[1471]: time="2024-02-09T13:16:47.207862369Z" level=info msg="StartContainer for \"82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34\"" Feb 9 13:16:47.228136 systemd[1]: Started cri-containerd-82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34.scope. Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.262479 kernel: kauditd_printk_skb: 106 callbacks suppressed Feb 9 13:16:47.262540 kernel: audit: type=1400 audit(1707484607.234:602): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:47.341681 kubelet[1884]: E0209 13:16:47.341618 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:47.421526 kernel: audit: type=1300 audit(1707484607.234:602): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:47.421562 kernel: audit: type=1327 audit(1707484607.234:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832373833393733363531613835343537616337663035333534343463 Feb 9 13:16:47.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832373833393733363531613835343537616337663035333534343463 Feb 9 13:16:47.513804 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.596575 kubelet[1884]: E0209 13:16:47.596534 1884 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:47.639499 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.639528 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.702514 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.766614 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.781441 env[1471]: time="2024-02-09T13:16:47.781421746Z" level=info msg="StartContainer for \"82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34\" returns successfully" Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.829581 systemd[1]: cri-containerd-82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34.scope: Deactivated successfully. Feb 9 13:16:47.892744 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.892803 kernel: audit: type=1400 audit(1707484607.234:603): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.234000 audit: BPF prog-id=69 op=LOAD Feb 9 13:16:47.234000 audit[2109]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c000337c10 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
13:16:47.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832373833393733363531613835343537616337663035333534343463 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:16:47.325000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.325000 audit: BPF prog-id=70 op=LOAD Feb 9 13:16:47.325000 audit[2109]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c000337c58 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:47.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832373833393733363531613835343537616337663035333534343463 Feb 9 13:16:47.512000 audit: BPF prog-id=70 op=UNLOAD Feb 9 13:16:47.512000 audit: BPF prog-id=69 op=UNLOAD Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:47.512000 audit: BPF prog-id=71 op=LOAD Feb 9 13:16:47.512000 audit[2109]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c000337ce8 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:47.512000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832373833393733363531613835343537616337663035333534343463 Feb 9 13:16:47.966000 audit: BPF prog-id=71 op=UNLOAD Feb 9 13:16:48.019087 env[1471]: time="2024-02-09T13:16:48.018940692Z" level=info msg="shim disconnected" 
id=82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34 Feb 9 13:16:48.019087 env[1471]: time="2024-02-09T13:16:48.019003345Z" level=warning msg="cleaning up after shim disconnected" id=82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34 namespace=k8s.io Feb 9 13:16:48.019087 env[1471]: time="2024-02-09T13:16:48.019027738Z" level=info msg="cleaning up dead shim" Feb 9 13:16:48.038883 env[1471]: time="2024-02-09T13:16:48.038818121Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:16:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2148 runtime=io.containerd.runc.v2\n" Feb 9 13:16:48.206454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396450222.mount: Deactivated successfully. Feb 9 13:16:48.206507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82783973651a85457ac7f0535444cba939b03af3b05db0ecc3675c86a285da34-rootfs.mount: Deactivated successfully. Feb 9 13:16:48.439925 env[1471]: time="2024-02-09T13:16:48.439895847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:48.440481 env[1471]: time="2024-02-09T13:16:48.440469610Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:48.441019 env[1471]: time="2024-02-09T13:16:48.441010276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:16:48.442049 env[1471]: time="2024-02-09T13:16:48.442010264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 13:16:48.442156 env[1471]: time="2024-02-09T13:16:48.442145268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 13:16:48.442468 env[1471]: time="2024-02-09T13:16:48.442455311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 13:16:48.443050 env[1471]: time="2024-02-09T13:16:48.443037648Z" level=info msg="CreateContainer within sandbox \"6443754b2a10868597bce1de9e92586c81c7c2de13db0072c4dbfe1ee737995b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 13:16:48.448685 env[1471]: time="2024-02-09T13:16:48.448613784Z" level=info msg="CreateContainer within sandbox \"6443754b2a10868597bce1de9e92586c81c7c2de13db0072c4dbfe1ee737995b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf42c30591c88c9520081a86fba84eced4bdb323400a30ade41217d4e86edce1\"" Feb 9 13:16:48.449009 env[1471]: time="2024-02-09T13:16:48.448996687Z" level=info msg="StartContainer for \"cf42c30591c88c9520081a86fba84eced4bdb323400a30ade41217d4e86edce1\"" Feb 9 13:16:48.459170 systemd[1]: Started cri-containerd-cf42c30591c88c9520081a86fba84eced4bdb323400a30ade41217d4e86edce1.scope. 
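The repeated `AVC avc: denied` records above cite capabilities by number: `capability=38` is `CAP_PERFMON` and `capability=39` is `CAP_BPF` (both introduced in Linux 5.8). A small sketch translating a denial into readable form (the `describe_avc` helper is hypothetical, for illustration only):

```python
# Capability numbers cited in the capability2 AVC denials above (Linux >= 5.8 numbering).
CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

def describe_avc(perm: str, capability: int) -> str:
    """Summarize a capability2 AVC denial like the ones emitted by runc in this log."""
    name = CAPABILITIES.get(capability, f"capability {capability}")
    return f"denied {{ {perm} }} -> {name}"

print(describe_avc("bpf", 39))      # denied { bpf } -> CAP_BPF
print(describe_avc("perfmon", 38))  # denied { perfmon } -> CAP_PERFMON
```

With `permissive=0` these denials are enforced, but the adjacent `BPF prog-id=... op=LOAD` records show the loads still succeed; runc probes these capabilities and falls back when they are denied.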
Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2019 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343263333035393163383863393532303038316138366662613834 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit: BPF prog-id=72 op=LOAD Feb 9 13:16:48.466000 audit[2167]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0001f9400 items=0 ppid=2019 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343263333035393163383863393532303038316138366662613834 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit: BPF prog-id=73 op=LOAD Feb 9 13:16:48.466000 audit[2167]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0001f9448 items=0 ppid=2019 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
13:16:48.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343263333035393163383863393532303038316138366662613834 Feb 9 13:16:48.466000 audit: BPF prog-id=73 op=UNLOAD Feb 9 13:16:48.466000 audit: BPF prog-id=72 op=UNLOAD Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { perfmon } for pid=2167 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit[2167]: AVC avc: denied { bpf } for pid=2167 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:16:48.466000 audit: BPF prog-id=74 op=LOAD Feb 9 13:16:48.466000 audit[2167]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0001f94d8 items=0 ppid=2019 pid=2167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343263333035393163383863393532303038316138366662613834 Feb 9 13:16:48.472778 env[1471]: time="2024-02-09T13:16:48.472755128Z" level=info msg="StartContainer for \"cf42c30591c88c9520081a86fba84eced4bdb323400a30ade41217d4e86edce1\" returns successfully" Feb 9 13:16:48.507000 audit[2226]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.507000 audit[2226]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8ec70270 a2=0 a3=7ffe8ec7025c items=0 ppid=2177 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.507000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 13:16:48.507000 audit[2227]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.507000 audit[2227]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6050bab0 a2=0 a3=7ffd6050ba9c items=0 ppid=2177 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 13:16:48.508000 audit[2228]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.508000 audit[2228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ef9d270 a2=0 a3=7fff6ef9d25c items=0 ppid=2177 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 13:16:48.508000 audit[2229]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.508000 audit[2229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeece485b0 a2=0 a3=7ffeece4859c items=0 ppid=2177 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
13:16:48.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 13:16:48.509000 audit[2235]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.509000 audit[2235]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6a93c000 a2=0 a3=7ffe6a93bfec items=0 ppid=2177 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 13:16:48.509000 audit[2237]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.509000 audit[2237]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7ba85e00 a2=0 a3=7ffe7ba85dec items=0 ppid=2177 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 13:16:48.597436 kubelet[1884]: E0209 13:16:48.597287 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:48.614000 audit[2238]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.614000 audit[2238]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff4fa74db0 a2=0 a3=7fff4fa74d9c items=0 ppid=2177 
pid=2238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 13:16:48.621000 audit[2240]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.621000 audit[2240]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc9804ba90 a2=0 a3=7ffc9804ba7c items=0 ppid=2177 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 13:16:48.630000 audit[2243]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.630000 audit[2243]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcf7e1c880 a2=0 a3=7ffcf7e1c86c items=0 ppid=2177 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.630000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 13:16:48.632000 audit[2244]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.632000 audit[2244]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03a42cf0 a2=0 a3=7fff03a42cdc items=0 ppid=2177 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 13:16:48.638000 audit[2246]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.638000 audit[2246]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4e018c30 a2=0 a3=7fff4e018c1c items=0 ppid=2177 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 13:16:48.641000 audit[2247]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.641000 audit[2247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffd19e1c9e0 a2=0 a3=7ffd19e1c9cc items=0 ppid=2177 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 13:16:48.647000 audit[2249]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.647000 audit[2249]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffedf2108d0 a2=0 a3=7ffedf2108bc items=0 ppid=2177 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 13:16:48.656000 audit[2252]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.656000 audit[2252]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1f7f6cd0 a2=0 a3=7ffc1f7f6cbc items=0 ppid=2177 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.656000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 13:16:48.659000 audit[2253]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.659000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4e076970 a2=0 a3=7fff4e07695c items=0 ppid=2177 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 13:16:48.665000 audit[2255]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.665000 audit[2255]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe18468960 a2=0 a3=7ffe1846894c items=0 ppid=2177 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 13:16:48.668000 audit[2256]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.668000 audit[2256]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc580c44e0 a2=0 
a3=7ffc580c44cc items=0 ppid=2177 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 13:16:48.674000 audit[2258]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.674000 audit[2258]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff77649a0 a2=0 a3=7ffff776498c items=0 ppid=2177 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.674000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 13:16:48.684000 audit[2261]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.684000 audit[2261]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc164258b0 a2=0 a3=7ffc1642589c items=0 ppid=2177 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.684000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 13:16:48.693000 audit[2264]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.693000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd6d085290 a2=0 a3=7ffd6d08527c items=0 ppid=2177 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 13:16:48.696000 audit[2265]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.696000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebb82f430 a2=0 a3=7ffebb82f41c items=0 ppid=2177 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 13:16:48.701000 audit[2267]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.701000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 
a0=3 a1=7ffd2a3bab90 a2=0 a3=7ffd2a3bab7c items=0 ppid=2177 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 13:16:48.710000 audit[2270]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 13:16:48.710000 audit[2270]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd5fe44110 a2=0 a3=7ffd5fe440fc items=0 ppid=2177 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 13:16:48.735000 audit[2274]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:16:48.735000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffde106b8a0 a2=0 a3=7ffde106b88c items=0 ppid=2177 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 
13:16:48.757000 audit[2274]: NETFILTER_CFG table=nat:59 family=2 entries=24 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:16:48.757000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffde106b8a0 a2=0 a3=7ffde106b88c items=0 ppid=2177 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:16:48.760000 audit[2280]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.760000 audit[2280]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff303bc940 a2=0 a3=7fff303bc92c items=0 ppid=2177 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 13:16:48.767000 audit[2282]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.767000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffec8117350 a2=0 a3=7ffec811733c items=0 ppid=2177 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.767000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 13:16:48.779000 audit[2285]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.779000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff2c27ef20 a2=0 a3=7fff2c27ef0c items=0 ppid=2177 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 13:16:48.782000 audit[2286]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.782000 audit[2286]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0935b3c0 a2=0 a3=7fff0935b3ac items=0 ppid=2177 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 13:16:48.788000 audit[2288]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.788000 audit[2288]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd9cf54f30 a2=0 a3=7ffd9cf54f1c items=0 ppid=2177 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 13:16:48.791000 audit[2289]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.791000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9a4a3e0 a2=0 a3=7ffdb9a4a3cc items=0 ppid=2177 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 13:16:48.797000 audit[2291]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.797000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe4436da70 a2=0 a3=7ffe4436da5c items=0 ppid=2177 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.797000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 13:16:48.806000 audit[2294]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.806000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffffcdae0e0 a2=0 a3=7ffffcdae0cc items=0 ppid=2177 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 13:16:48.809000 audit[2295]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.809000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd90e8b90 a2=0 a3=7fffd90e8b7c items=0 ppid=2177 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 13:16:48.815000 audit[2297]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.815000 audit[2297]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fffe4cbadb0 a2=0 a3=7fffe4cbad9c items=0 ppid=2177 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 13:16:48.818000 audit[2298]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.818000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3402b870 a2=0 a3=7ffd3402b85c items=0 ppid=2177 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 13:16:48.824000 audit[2300]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.824000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9225c440 a2=0 a3=7fff9225c42c items=0 ppid=2177 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.824000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 13:16:48.833000 audit[2303]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.833000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd0b67a50 a2=0 a3=7ffcd0b67a3c items=0 ppid=2177 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 13:16:48.842000 audit[2306]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.842000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6f37c5a0 a2=0 a3=7ffe6f37c58c items=0 ppid=2177 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 13:16:48.845000 audit[2307]: NETFILTER_CFG table=nat:74 family=10 entries=1 
op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.845000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff71b7d680 a2=0 a3=7fff71b7d66c items=0 ppid=2177 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 13:16:48.850000 audit[2309]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.850000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc91da4e90 a2=0 a3=7ffc91da4e7c items=0 ppid=2177 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 13:16:48.860000 audit[2312]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 13:16:48.860000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc5b782df0 a2=0 a3=7ffc5b782ddc items=0 ppid=2177 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.860000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 13:16:48.873000 audit[2317]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 13:16:48.873000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffddf098760 a2=0 a3=7ffddf09874c items=0 ppid=2177 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.873000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:16:48.875000 audit[2317]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 13:16:48.875000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffddf098760 a2=0 a3=7ffddf09874c items=0 ppid=2177 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:16:48.875000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:16:49.342648 kubelet[1884]: E0209 13:16:49.342526 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 
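Aside: the `PROCTITLE` records above hex-encode the process argv with NUL bytes as separators. A minimal sketch to recover the command line from one of them (`decode_proctitle` is a hypothetical helper for reading these logs, not part of any tooling shown here):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE record.

    The kernel audit subsystem logs the full argv as a single hex string
    in which the original NUL separators between arguments are preserved,
    so splitting on b"\x00" recovers the argument vector.
    """
    raw = bytes.fromhex(hex_str)
    return " ".join(arg.decode("utf-8", errors="replace")
                    for arg in raw.split(b"\x00"))


# The ip6tables-restore record logged above decodes to:
#   ip6tables-restore -w 5 -W 100000 --noflush --counters
print(decode_proctitle(
    "6970367461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
```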
13:16:49.425757 kubelet[1884]: I0209 13:16:49.425646 1884 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fplv9" podStartSLOduration=-9.223372017429247e+09 pod.CreationTimestamp="2024-02-09 13:16:30 +0000 UTC" firstStartedPulling="2024-02-09 13:16:37.91158984 +0000 UTC m=+20.571681600" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:16:49.425356748 +0000 UTC m=+32.085448568" watchObservedRunningTime="2024-02-09 13:16:49.425529444 +0000 UTC m=+32.085621246" Feb 9 13:16:49.598405 kubelet[1884]: E0209 13:16:49.598192 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:50.599116 kubelet[1884]: E0209 13:16:50.599002 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:51.341626 kubelet[1884]: E0209 13:16:51.341487 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:51.599417 kubelet[1884]: E0209 13:16:51.599203 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:52.201135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670730930.mount: Deactivated successfully. 
Feb 9 13:16:52.600198 kubelet[1884]: E0209 13:16:52.600133 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:53.342191 kubelet[1884]: E0209 13:16:53.342084 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:53.601279 kubelet[1884]: E0209 13:16:53.601087 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:54.601762 kubelet[1884]: E0209 13:16:54.601650 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:55.341843 kubelet[1884]: E0209 13:16:55.341788 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:55.602885 kubelet[1884]: E0209 13:16:55.602706 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:56.603746 kubelet[1884]: E0209 13:16:56.603699 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:57.341343 kubelet[1884]: E0209 13:16:57.341291 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" 
podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:57.574428 kubelet[1884]: E0209 13:16:57.574325 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:57.604077 kubelet[1884]: E0209 13:16:57.603869 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:58.604483 kubelet[1884]: E0209 13:16:58.604366 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:16:59.341833 kubelet[1884]: E0209 13:16:59.341778 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:16:59.605095 kubelet[1884]: E0209 13:16:59.604887 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:00.605846 kubelet[1884]: E0209 13:17:00.605737 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:01.342138 kubelet[1884]: E0209 13:17:01.342036 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:01.606646 kubelet[1884]: E0209 13:17:01.606582 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:02.607518 kubelet[1884]: E0209 13:17:02.607379 1884 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:03.342532 kubelet[1884]: E0209 13:17:03.342456 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:03.608702 kubelet[1884]: E0209 13:17:03.608503 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:04.609804 kubelet[1884]: E0209 13:17:04.609733 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:05.342111 kubelet[1884]: E0209 13:17:05.342046 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:05.611150 kubelet[1884]: E0209 13:17:05.610933 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:06.611745 kubelet[1884]: E0209 13:17:06.611728 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:07.300966 env[1471]: time="2024-02-09T13:17:07.300872485Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:17:07.302374 env[1471]: time="2024-02-09T13:17:07.302303173Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:17:07.305067 env[1471]: time="2024-02-09T13:17:07.305004829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:17:07.307541 env[1471]: time="2024-02-09T13:17:07.307496187Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:17:07.308823 env[1471]: time="2024-02-09T13:17:07.308786216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 13:17:07.311244 env[1471]: time="2024-02-09T13:17:07.311209143Z" level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 13:17:07.321512 env[1471]: time="2024-02-09T13:17:07.321445182Z" level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b\"" Feb 9 13:17:07.322009 env[1471]: time="2024-02-09T13:17:07.321971096Z" level=info msg="StartContainer for \"a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b\"" Feb 9 13:17:07.341898 kubelet[1884]: E0209 13:17:07.341836 1884 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:07.361936 systemd[1]: Started cri-containerd-a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b.scope. Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.397941 kernel: kauditd_printk_skb: 209 callbacks suppressed Feb 9 13:17:07.397978 kernel: audit: type=1400 audit(1707484627.370:659): avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2015 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:17:07.555871 kernel: audit: type=1300 audit(1707484627.370:659): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2015 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:17:07.555937 kernel: audit: type=1327 audit(1707484627.370:659): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132376536636432323833383030343236643465333663366265326435 Feb 9 13:17:07.370000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132376536636432323833383030343236643465333663366265326435 Feb 9 13:17:07.612020 kubelet[1884]: E0209 13:17:07.612008 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:07.648119 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710864 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.773688 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.836353 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.964038 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.964072 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:08.028083 kernel: audit: type=1400 audit(1707484627.370:660): avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:08.043407 env[1471]: time="2024-02-09T13:17:08.043356972Z" level=info msg="StartContainer for \"a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b\" returns successfully" Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.370000 audit: BPF prog-id=75 op=LOAD Feb 9 13:17:07.370000 audit[2326]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c000255da0 items=0 ppid=2015 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:17:07.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132376536636432323833383030343236643465333663366265326435 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.555000 audit: BPF prog-id=76 op=LOAD Feb 9 13:17:07.555000 audit[2326]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c000255de8 items=0 ppid=2015 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:17:07.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132376536636432323833383030343236643465333663366265326435 Feb 9 13:17:07.710000 audit: BPF prog-id=76 op=UNLOAD Feb 9 13:17:07.710000 audit: BPF prog-id=75 op=UNLOAD Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { bpf } 
for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { perfmon } for pid=2326 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:17:07.710000 audit[2326]: AVC avc: denied { bpf } for pid=2326 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 9 13:17:07.710000 audit: BPF prog-id=77 op=LOAD
Feb 9 13:17:07.710000 audit[2326]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c000255e78 items=0 ppid=2015 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 13:17:07.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132376536636432323833383030343236643465333663366265326435
Feb 9 13:17:08.612788 kubelet[1884]: E0209 13:17:08.612721 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:08.687876 env[1471]: time="2024-02-09T13:17:08.687732250Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 13:17:08.693017 systemd[1]: cri-containerd-a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b.scope: Deactivated successfully.
Feb 9 13:17:08.706000 audit: BPF prog-id=77 op=UNLOAD
Feb 9 13:17:08.743809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b-rootfs.mount: Deactivated successfully.
Feb 9 13:17:08.773460 kubelet[1884]: I0209 13:17:08.773405 1884 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 13:17:08.792353 kubelet[1884]: I0209 13:17:08.792281 1884 topology_manager.go:210] "Topology Admit Handler"
Feb 9 13:17:08.793360 kubelet[1884]: I0209 13:17:08.793268 1884 topology_manager.go:210] "Topology Admit Handler"
Feb 9 13:17:08.794153 kubelet[1884]: I0209 13:17:08.794070 1884 topology_manager.go:210] "Topology Admit Handler"
Feb 9 13:17:08.806378 systemd[1]: Created slice kubepods-burstable-podc4e7f3db_c090_45e4_97c1_38a20de9b400.slice.
Feb 9 13:17:08.817438 kubelet[1884]: I0209 13:17:08.817387 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptch\" (UniqueName: \"kubernetes.io/projected/c4e7f3db-c090-45e4-97c1-38a20de9b400-kube-api-access-9ptch\") pod \"coredns-787d4945fb-fz782\" (UID: \"c4e7f3db-c090-45e4-97c1-38a20de9b400\") " pod="kube-system/coredns-787d4945fb-fz782"
Feb 9 13:17:08.817717 kubelet[1884]: I0209 13:17:08.817516 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7e5efc5-201d-49b9-967f-26a58631682a-tigera-ca-bundle\") pod \"calico-kube-controllers-68c77fd6bd-t8ckd\" (UID: \"d7e5efc5-201d-49b9-967f-26a58631682a\") " pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd"
Feb 9 13:17:08.817717 kubelet[1884]: I0209 13:17:08.817681 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8ch\" (UniqueName: \"kubernetes.io/projected/d7e5efc5-201d-49b9-967f-26a58631682a-kube-api-access-kc8ch\") pod \"calico-kube-controllers-68c77fd6bd-t8ckd\" (UID: \"d7e5efc5-201d-49b9-967f-26a58631682a\") " pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd"
Feb 9 13:17:08.818028 kubelet[1884]: I0209 13:17:08.817827 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4e7f3db-c090-45e4-97c1-38a20de9b400-config-volume\") pod \"coredns-787d4945fb-fz782\" (UID: \"c4e7f3db-c090-45e4-97c1-38a20de9b400\") " pod="kube-system/coredns-787d4945fb-fz782"
Feb 9 13:17:08.818028 kubelet[1884]: I0209 13:17:08.817936 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40a62a42-1c08-4513-9fc4-544d64d73811-config-volume\") pod \"coredns-787d4945fb-q5v9s\" (UID: \"40a62a42-1c08-4513-9fc4-544d64d73811\") " pod="kube-system/coredns-787d4945fb-q5v9s"
Feb 9 13:17:08.818380 kubelet[1884]: I0209 13:17:08.818212 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5j4n\" (UniqueName: \"kubernetes.io/projected/40a62a42-1c08-4513-9fc4-544d64d73811-kube-api-access-m5j4n\") pod \"coredns-787d4945fb-q5v9s\" (UID: \"40a62a42-1c08-4513-9fc4-544d64d73811\") " pod="kube-system/coredns-787d4945fb-q5v9s"
Feb 9 13:17:08.831006 systemd[1]: Created slice kubepods-burstable-pod40a62a42_1c08_4513_9fc4_544d64d73811.slice.
Feb 9 13:17:08.839355 systemd[1]: Created slice kubepods-besteffort-podd7e5efc5_201d_49b9_967f_26a58631682a.slice.
Feb 9 13:17:09.126882 env[1471]: time="2024-02-09T13:17:09.126744202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fz782,Uid:c4e7f3db-c090-45e4-97c1-38a20de9b400,Namespace:kube-system,Attempt:0,}"
Feb 9 13:17:09.136759 env[1471]: time="2024-02-09T13:17:09.136653186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q5v9s,Uid:40a62a42-1c08-4513-9fc4-544d64d73811,Namespace:kube-system,Attempt:0,}"
Feb 9 13:17:09.144881 env[1471]: time="2024-02-09T13:17:09.144756682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c77fd6bd-t8ckd,Uid:d7e5efc5-201d-49b9-967f-26a58631682a,Namespace:calico-system,Attempt:0,}"
Feb 9 13:17:09.362265 systemd[1]: Created slice kubepods-besteffort-pod4f51fc53_a7af_4e05_9116_86df85873e6c.slice.
Feb 9 13:17:09.366246 env[1471]: time="2024-02-09T13:17:09.366161588Z" level=info msg="shim disconnected" id=a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b
Feb 9 13:17:09.366426 env[1471]: time="2024-02-09T13:17:09.366262855Z" level=warning msg="cleaning up after shim disconnected" id=a27e6cd2283800426d4e36c6be2d5f4eb67fb1100a4af79ec55d8eca429e790b namespace=k8s.io
Feb 9 13:17:09.366426 env[1471]: time="2024-02-09T13:17:09.366295825Z" level=info msg="cleaning up dead shim"
Feb 9 13:17:09.367068 env[1471]: time="2024-02-09T13:17:09.366999171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72bhh,Uid:4f51fc53-a7af-4e05-9116-86df85873e6c,Namespace:calico-system,Attempt:0,}"
Feb 9 13:17:09.390377 env[1471]: time="2024-02-09T13:17:09.390297802Z" level=warning msg="cleanup warnings time=\"2024-02-09T13:17:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2390 runtime=io.containerd.runc.v2\n"
Feb 9 13:17:09.400113 env[1471]: time="2024-02-09T13:17:09.400066797Z" level=error msg="Failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400288 env[1471]: time="2024-02-09T13:17:09.400269345Z" level=error msg="Failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400359 env[1471]: time="2024-02-09T13:17:09.400345139Z" level=error msg="encountered an error cleaning up failed sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400395 env[1471]: time="2024-02-09T13:17:09.400376413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q5v9s,Uid:40a62a42-1c08-4513-9fc4-544d64d73811,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400450 env[1471]: time="2024-02-09T13:17:09.400436224Z" level=error msg="encountered an error cleaning up failed sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400484 env[1471]: time="2024-02-09T13:17:09.400460443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fz782,Uid:c4e7f3db-c090-45e4-97c1-38a20de9b400,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400535 kubelet[1884]: E0209 13:17:09.400522 1884 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400585 kubelet[1884]: E0209 13:17:09.400540 1884 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400585 kubelet[1884]: E0209 13:17:09.400567 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-q5v9s"
Feb 9 13:17:09.400585 kubelet[1884]: E0209 13:17:09.400571 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-fz782"
Feb 9 13:17:09.400585 kubelet[1884]: E0209 13:17:09.400582 1884 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-q5v9s"
Feb 9 13:17:09.400704 kubelet[1884]: E0209 13:17:09.400585 1884 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-fz782"
Feb 9 13:17:09.400704 kubelet[1884]: E0209 13:17:09.400612 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-fz782_kube-system(c4e7f3db-c090-45e4-97c1-38a20de9b400)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-fz782_kube-system(c4e7f3db-c090-45e4-97c1-38a20de9b400)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400
Feb 9 13:17:09.400704 kubelet[1884]: E0209 13:17:09.400613 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-q5v9s_kube-system(40a62a42-1c08-4513-9fc4-544d64d73811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-q5v9s_kube-system(40a62a42-1c08-4513-9fc4-544d64d73811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811
Feb 9 13:17:09.400802 env[1471]: time="2024-02-09T13:17:09.400780348Z" level=error msg="Failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400839 env[1471]: time="2024-02-09T13:17:09.400820732Z" level=error msg="Failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400944 env[1471]: time="2024-02-09T13:17:09.400929901Z" level=error msg="encountered an error cleaning up failed sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400966 env[1471]: time="2024-02-09T13:17:09.400952891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72bhh,Uid:4f51fc53-a7af-4e05-9116-86df85873e6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.400992 env[1471]: time="2024-02-09T13:17:09.400972902Z" level=error msg="encountered an error cleaning up failed sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.401023 env[1471]: time="2024-02-09T13:17:09.400995323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c77fd6bd-t8ckd,Uid:d7e5efc5-201d-49b9-967f-26a58631682a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.401066 kubelet[1884]: E0209 13:17:09.401019 1884 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.401066 kubelet[1884]: E0209 13:17:09.401036 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-72bhh"
Feb 9 13:17:09.401066 kubelet[1884]: E0209 13:17:09.401047 1884 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-72bhh"
Feb 9 13:17:09.401066 kubelet[1884]: E0209 13:17:09.401056 1884 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.401147 kubelet[1884]: E0209 13:17:09.401073 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd"
Feb 9 13:17:09.401147 kubelet[1884]: E0209 13:17:09.401087 1884 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd"
Feb 9 13:17:09.401147 kubelet[1884]: E0209 13:17:09.401108 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68c77fd6bd-t8ckd_calico-system(d7e5efc5-201d-49b9-967f-26a58631682a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68c77fd6bd-t8ckd_calico-system(d7e5efc5-201d-49b9-967f-26a58631682a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a
Feb 9 13:17:09.401222 kubelet[1884]: E0209 13:17:09.401074 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-72bhh_calico-system(4f51fc53-a7af-4e05-9116-86df85873e6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-72bhh_calico-system(4f51fc53-a7af-4e05-9116-86df85873e6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c
Feb 9 13:17:09.450490 kubelet[1884]: I0209 13:17:09.450420 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a"
Feb 9 13:17:09.456129 kubelet[1884]: I0209 13:17:09.456075 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:17:09.456450 env[1471]: time="2024-02-09T13:17:09.456386697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 9 13:17:09.457366 env[1471]: time="2024-02-09T13:17:09.457298693Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\""
Feb 9 13:17:09.458113 kubelet[1884]: I0209 13:17:09.458028 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e"
Feb 9 13:17:09.459175 env[1471]: time="2024-02-09T13:17:09.459076149Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\""
Feb 9 13:17:09.460025 kubelet[1884]: I0209 13:17:09.459969 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce"
Feb 9 13:17:09.461047 env[1471]: time="2024-02-09T13:17:09.460980487Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\""
Feb 9 13:17:09.464355 env[1471]: time="2024-02-09T13:17:09.464216843Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\""
Feb 9 13:17:09.500921 env[1471]: time="2024-02-09T13:17:09.500861449Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.501119 kubelet[1884]: E0209 13:17:09.501095 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:17:09.501225 kubelet[1884]: E0209 13:17:09.501151 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d}
Feb 9 13:17:09.501225 kubelet[1884]: E0209 13:17:09.501193 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:09.501225 kubelet[1884]: E0209 13:17:09.501223 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c
Feb 9 13:17:09.501489 kubelet[1884]: E0209 13:17:09.501408 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e"
Feb 9 13:17:09.501489 kubelet[1884]: E0209 13:17:09.501442 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e}
Feb 9 13:17:09.501638 env[1471]: time="2024-02-09T13:17:09.501239772Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.501709 kubelet[1884]: E0209 13:17:09.501499 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:09.501709 kubelet[1884]: E0209 13:17:09.501540 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a
Feb 9 13:17:09.504651 env[1471]: time="2024-02-09T13:17:09.504614877Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.504788 kubelet[1884]: E0209 13:17:09.504771 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce"
Feb 9 13:17:09.504876 kubelet[1884]: E0209 13:17:09.504795 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce}
Feb 9 13:17:09.504876 kubelet[1884]: E0209 13:17:09.504826 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:09.504876 kubelet[1884]: E0209 13:17:09.504851 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811
Feb 9 13:17:09.505914 env[1471]: time="2024-02-09T13:17:09.505876852Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:09.506054 kubelet[1884]: E0209 13:17:09.506038 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a"
Feb 9 13:17:09.506107 kubelet[1884]: E0209 13:17:09.506068 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a}
Feb 9 13:17:09.506107 kubelet[1884]: E0209 13:17:09.506107 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:09.506202 kubelet[1884]: E0209 13:17:09.506132 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400
Feb 9 13:17:09.613827 kubelet[1884]: E0209 13:17:09.613726 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:09.746347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce-shm.mount: Deactivated successfully.
Feb 9 13:17:09.746594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a-shm.mount: Deactivated successfully.
Feb 9 13:17:10.614586 kubelet[1884]: E0209 13:17:10.614431 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:11.614744 kubelet[1884]: E0209 13:17:11.614625 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:12.615280 kubelet[1884]: E0209 13:17:12.615172 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:13.616207 kubelet[1884]: E0209 13:17:13.616092 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:14.617119 kubelet[1884]: E0209 13:17:14.617005 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:15.618139 kubelet[1884]: E0209 13:17:15.618035 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:16.618340 kubelet[1884]: E0209 13:17:16.618234 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:17.574396 kubelet[1884]: E0209 13:17:17.574296 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:17.619512 kubelet[1884]: E0209 13:17:17.619407 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:18.619758 kubelet[1884]: E0209 13:17:18.619651 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:19.619907 kubelet[1884]: E0209 13:17:19.619799 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:20.620637 kubelet[1884]: E0209 13:17:20.620537 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:21.342818 env[1471]: time="2024-02-09T13:17:21.342608716Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\""
Feb 9 13:17:21.369694 env[1471]: time="2024-02-09T13:17:21.369615311Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:21.369841 kubelet[1884]: E0209 13:17:21.369798 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e"
Feb 9 13:17:21.369841 kubelet[1884]: E0209 13:17:21.369825 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e}
Feb 9 13:17:21.369908 kubelet[1884]: E0209 13:17:21.369848 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:21.369908 kubelet[1884]: E0209 13:17:21.369866 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a
Feb 9 13:17:21.621055 kubelet[1884]: E0209 13:17:21.620832 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:17:22.343372 env[1471]: time="2024-02-09T13:17:22.343247124Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\""
Feb 9 13:17:22.348297 env[1471]: time="2024-02-09T13:17:22.348203575Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\""
Feb 9 13:17:22.358976 env[1471]: time="2024-02-09T13:17:22.358936347Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 9 13:17:22.359078 kubelet[1884]: E0209 13:17:22.359063 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:17:22.359115 kubelet[1884]: E0209 13:17:22.359086 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d}
Feb 9 13:17:22.359115 kubelet[1884]: E0209 13:17:22.359111 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 9 13:17:22.359185 kubelet[1884]: E0209 13:17:22.359129 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:22.361045 env[1471]: time="2024-02-09T13:17:22.360987202Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:22.361126 kubelet[1884]: E0209 13:17:22.361095 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:17:22.361126 kubelet[1884]: E0209 13:17:22.361112 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:17:22.361184 kubelet[1884]: E0209 13:17:22.361133 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:22.361184 kubelet[1884]: E0209 13:17:22.361153 1884 pod_workers.go:965] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:17:22.622207 kubelet[1884]: E0209 13:17:22.621996 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:23.622724 kubelet[1884]: E0209 13:17:23.622617 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:24.343601 env[1471]: time="2024-02-09T13:17:24.343472719Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:17:24.370126 env[1471]: time="2024-02-09T13:17:24.370093292Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:24.370301 kubelet[1884]: E0209 13:17:24.370277 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:17:24.370342 kubelet[1884]: E0209 13:17:24.370316 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:17:24.370342 kubelet[1884]: E0209 13:17:24.370337 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:24.370415 kubelet[1884]: E0209 13:17:24.370355 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:17:24.623426 kubelet[1884]: E0209 13:17:24.623204 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:25.623973 kubelet[1884]: E0209 13:17:25.623857 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:26.624762 kubelet[1884]: E0209 13:17:26.624661 1884 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:27.625852 kubelet[1884]: E0209 13:17:27.625745 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:28.625991 kubelet[1884]: E0209 13:17:28.625882 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:29.626255 kubelet[1884]: E0209 13:17:29.626148 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:30.627503 kubelet[1884]: E0209 13:17:30.627391 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:31.627669 kubelet[1884]: E0209 13:17:31.627588 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:32.628184 kubelet[1884]: E0209 13:17:32.628076 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:33.629178 kubelet[1884]: E0209 13:17:33.629062 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:34.629802 kubelet[1884]: E0209 13:17:34.629701 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:35.630722 kubelet[1884]: E0209 13:17:35.630612 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:36.342860 env[1471]: time="2024-02-09T13:17:36.342767247Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:17:36.372366 env[1471]: time="2024-02-09T13:17:36.372291313Z" level=error msg="StopPodSandbox for 
\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:36.372516 kubelet[1884]: E0209 13:17:36.372506 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:17:36.372568 kubelet[1884]: E0209 13:17:36.372532 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:17:36.372568 kubelet[1884]: E0209 13:17:36.372557 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:36.372664 kubelet[1884]: E0209 13:17:36.372575 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:17:36.631075 kubelet[1884]: E0209 13:17:36.630857 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:37.342734 env[1471]: time="2024-02-09T13:17:37.342604936Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:17:37.343040 env[1471]: time="2024-02-09T13:17:37.342719434Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:17:37.369311 env[1471]: time="2024-02-09T13:17:37.369262949Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:37.369462 env[1471]: time="2024-02-09T13:17:37.369376335Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:37.369503 kubelet[1884]: E0209 13:17:37.369452 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:17:37.369503 kubelet[1884]: E0209 13:17:37.369464 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:17:37.369503 kubelet[1884]: E0209 13:17:37.369478 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:17:37.369503 kubelet[1884]: E0209 13:17:37.369479 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:17:37.369503 kubelet[1884]: E0209 13:17:37.369501 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:37.369692 kubelet[1884]: E0209 13:17:37.369514 1884 
kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:37.369692 kubelet[1884]: E0209 13:17:37.369518 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:17:37.369692 kubelet[1884]: E0209 13:17:37.369540 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:37.573866 kubelet[1884]: E0209 13:17:37.573799 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:37.631489 kubelet[1884]: E0209 
13:17:37.631302 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:38.342960 env[1471]: time="2024-02-09T13:17:38.342831021Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:17:38.369285 env[1471]: time="2024-02-09T13:17:38.369200351Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:38.369523 kubelet[1884]: E0209 13:17:38.369389 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:17:38.369523 kubelet[1884]: E0209 13:17:38.369413 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:17:38.369523 kubelet[1884]: E0209 13:17:38.369433 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:38.369523 kubelet[1884]: E0209 13:17:38.369451 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:17:38.632773 kubelet[1884]: E0209 13:17:38.632536 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:39.632808 kubelet[1884]: E0209 13:17:39.632703 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:40.633657 kubelet[1884]: E0209 13:17:40.633537 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:41.633842 kubelet[1884]: E0209 13:17:41.633746 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:42.634155 kubelet[1884]: E0209 13:17:42.634104 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:43.635113 kubelet[1884]: E0209 13:17:43.635003 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:44.635527 kubelet[1884]: E0209 13:17:44.635421 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:17:45.635973 kubelet[1884]: E0209 13:17:45.635863 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:46.637030 kubelet[1884]: E0209 13:17:46.636917 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:47.637505 kubelet[1884]: E0209 13:17:47.637400 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:48.638088 kubelet[1884]: E0209 13:17:48.637981 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:49.638187 kubelet[1884]: E0209 13:17:49.638106 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:50.342814 env[1471]: time="2024-02-09T13:17:50.342700550Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:17:50.360423 env[1471]: time="2024-02-09T13:17:50.360358762Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:50.360518 kubelet[1884]: E0209 13:17:50.360479 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:17:50.360518 kubelet[1884]: E0209 13:17:50.360503 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:17:50.360584 kubelet[1884]: E0209 13:17:50.360524 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:50.360584 kubelet[1884]: E0209 13:17:50.360542 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:17:50.639032 kubelet[1884]: E0209 13:17:50.638816 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:51.342603 env[1471]: time="2024-02-09T13:17:51.342441979Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:17:51.369372 env[1471]: time="2024-02-09T13:17:51.369304421Z" level=error 
msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:51.369568 kubelet[1884]: E0209 13:17:51.369449 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:17:51.369568 kubelet[1884]: E0209 13:17:51.369474 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:17:51.369568 kubelet[1884]: E0209 13:17:51.369496 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:51.369568 kubelet[1884]: E0209 13:17:51.369515 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:17:51.639754 kubelet[1884]: E0209 13:17:51.639563 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:52.343236 env[1471]: time="2024-02-09T13:17:52.343147856Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:17:52.343236 env[1471]: time="2024-02-09T13:17:52.343156713Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:17:52.370001 env[1471]: time="2024-02-09T13:17:52.369941362Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:52.370001 env[1471]: time="2024-02-09T13:17:52.369943410Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:17:52.370297 kubelet[1884]: E0209 13:17:52.370106 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:17:52.370297 kubelet[1884]: E0209 13:17:52.370133 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:17:52.370297 kubelet[1884]: E0209 13:17:52.370156 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:52.370297 kubelet[1884]: E0209 13:17:52.370174 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:17:52.370427 kubelet[1884]: E0209 13:17:52.370108 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:17:52.370427 kubelet[1884]: E0209 13:17:52.370194 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:17:52.370427 kubelet[1884]: E0209 13:17:52.370223 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:17:52.370427 kubelet[1884]: E0209 13:17:52.370244 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:17:52.640294 kubelet[1884]: E0209 13:17:52.640068 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:53.640762 kubelet[1884]: E0209 
13:17:53.640652 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:54.641600 kubelet[1884]: E0209 13:17:54.641498 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:55.642230 kubelet[1884]: E0209 13:17:55.642118 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:56.642892 kubelet[1884]: E0209 13:17:56.642824 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:57.574008 kubelet[1884]: E0209 13:17:57.573937 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:57.643186 kubelet[1884]: E0209 13:17:57.643106 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:58.644454 kubelet[1884]: E0209 13:17:58.644374 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:17:59.645738 kubelet[1884]: E0209 13:17:59.645629 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:00.645890 kubelet[1884]: E0209 13:18:00.645812 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:01.342608 env[1471]: time="2024-02-09T13:18:01.342437909Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:18:01.368689 env[1471]: time="2024-02-09T13:18:01.368603105Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox 
\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:01.368781 kubelet[1884]: E0209 13:18:01.368772 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:18:01.368819 kubelet[1884]: E0209 13:18:01.368793 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:18:01.368819 kubelet[1884]: E0209 13:18:01.368813 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:01.368881 kubelet[1884]: E0209 13:18:01.368828 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:18:01.646865 kubelet[1884]: E0209 13:18:01.646658 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:02.647525 kubelet[1884]: E0209 13:18:02.647443 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:03.343073 env[1471]: time="2024-02-09T13:18:03.342945982Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:18:03.371406 env[1471]: time="2024-02-09T13:18:03.371342977Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:03.371501 kubelet[1884]: E0209 13:18:03.371476 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:18:03.371535 kubelet[1884]: E0209 13:18:03.371502 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 
13:18:03.371535 kubelet[1884]: E0209 13:18:03.371523 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:03.371611 kubelet[1884]: E0209 13:18:03.371540 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:18:03.647934 kubelet[1884]: E0209 13:18:03.647705 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:04.648508 kubelet[1884]: E0209 13:18:04.648403 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:05.648844 kubelet[1884]: E0209 13:18:05.648726 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:06.343087 env[1471]: time="2024-02-09T13:18:06.342962104Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:18:06.369744 env[1471]: time="2024-02-09T13:18:06.369683211Z" level=error 
msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:06.369895 kubelet[1884]: E0209 13:18:06.369854 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:18:06.369895 kubelet[1884]: E0209 13:18:06.369879 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:18:06.369959 kubelet[1884]: E0209 13:18:06.369900 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:06.369959 kubelet[1884]: E0209 13:18:06.369918 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:18:06.650015 kubelet[1884]: E0209 13:18:06.649793 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:07.342916 env[1471]: time="2024-02-09T13:18:07.342781500Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:18:07.368807 env[1471]: time="2024-02-09T13:18:07.368752333Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:07.368996 kubelet[1884]: E0209 13:18:07.368905 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:18:07.368996 kubelet[1884]: E0209 13:18:07.368933 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:18:07.368996 kubelet[1884]: 
E0209 13:18:07.368953 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:07.368996 kubelet[1884]: E0209 13:18:07.368971 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:18:07.650634 kubelet[1884]: E0209 13:18:07.650414 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:08.651103 kubelet[1884]: E0209 13:18:08.650988 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:09.651755 kubelet[1884]: E0209 13:18:09.651649 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:10.652729 kubelet[1884]: E0209 13:18:10.652621 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:11.653603 kubelet[1884]: E0209 13:18:11.653518 1884 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:12.654820 kubelet[1884]: E0209 13:18:12.654719 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:13.655980 kubelet[1884]: E0209 13:18:13.655860 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:14.656113 kubelet[1884]: E0209 13:18:14.656000 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:15.656239 kubelet[1884]: E0209 13:18:15.656162 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:16.342890 env[1471]: time="2024-02-09T13:18:16.342788281Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:18:16.369923 env[1471]: time="2024-02-09T13:18:16.369871648Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:16.370096 kubelet[1884]: E0209 13:18:16.370082 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 
13:18:16.370132 kubelet[1884]: E0209 13:18:16.370107 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:18:16.370154 kubelet[1884]: E0209 13:18:16.370132 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:16.370154 kubelet[1884]: E0209 13:18:16.370147 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:18:16.657414 kubelet[1884]: E0209 13:18:16.657191 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:17.342677 env[1471]: time="2024-02-09T13:18:17.342530846Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:18:17.369393 env[1471]: time="2024-02-09T13:18:17.369314796Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox 
\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:17.369645 kubelet[1884]: E0209 13:18:17.369485 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:18:17.369645 kubelet[1884]: E0209 13:18:17.369511 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:18:17.369645 kubelet[1884]: E0209 13:18:17.369534 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:17.369645 kubelet[1884]: E0209 13:18:17.369557 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:18:17.574163 kubelet[1884]: E0209 13:18:17.574060 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:17.658340 kubelet[1884]: E0209 13:18:17.658120 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:18.353439 env[1471]: time="2024-02-09T13:18:18.353337145Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:18:18.382051 env[1471]: time="2024-02-09T13:18:18.381993002Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:18.382247 kubelet[1884]: E0209 13:18:18.382149 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:18:18.382247 kubelet[1884]: E0209 13:18:18.382173 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 
13:18:18.382247 kubelet[1884]: E0209 13:18:18.382192 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:18.382247 kubelet[1884]: E0209 13:18:18.382210 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:18:18.659330 kubelet[1884]: E0209 13:18:18.659107 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:19.659913 kubelet[1884]: E0209 13:18:19.659812 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:20.660075 kubelet[1884]: E0209 13:18:20.659958 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:21.343572 env[1471]: time="2024-02-09T13:18:21.343418790Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:18:21.369929 env[1471]: time="2024-02-09T13:18:21.369852925Z" level=error 
msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:21.370094 kubelet[1884]: E0209 13:18:21.370070 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:18:21.370135 kubelet[1884]: E0209 13:18:21.370110 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:18:21.370135 kubelet[1884]: E0209 13:18:21.370131 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:21.370207 kubelet[1884]: E0209 13:18:21.370149 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:18:21.660831 kubelet[1884]: E0209 13:18:21.660620 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:22.661831 kubelet[1884]: E0209 13:18:22.661769 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:23.662281 kubelet[1884]: E0209 13:18:23.662170 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:24.662725 kubelet[1884]: E0209 13:18:24.662625 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:25.663733 kubelet[1884]: E0209 13:18:25.663626 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:26.663881 kubelet[1884]: E0209 13:18:26.663779 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:27.664251 kubelet[1884]: E0209 13:18:27.664149 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:28.665123 kubelet[1884]: E0209 13:18:28.665019 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:29.343198 env[1471]: time="2024-02-09T13:18:29.343078205Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 
13:18:29.369317 env[1471]: time="2024-02-09T13:18:29.369239211Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:29.369435 kubelet[1884]: E0209 13:18:29.369426 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:18:29.369466 kubelet[1884]: E0209 13:18:29.369451 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:18:29.369487 kubelet[1884]: E0209 13:18:29.369473 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:29.369533 kubelet[1884]: E0209 13:18:29.369490 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:18:29.666382 kubelet[1884]: E0209 13:18:29.666167 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:30.343174 env[1471]: time="2024-02-09T13:18:30.343043161Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:18:30.371836 env[1471]: time="2024-02-09T13:18:30.371781029Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:30.372130 kubelet[1884]: E0209 13:18:30.372026 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:18:30.372130 kubelet[1884]: E0209 13:18:30.372082 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:18:30.372130 kubelet[1884]: E0209 13:18:30.372101 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:30.372130 kubelet[1884]: E0209 13:18:30.372121 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:18:30.667121 kubelet[1884]: E0209 13:18:30.666911 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:31.343606 env[1471]: time="2024-02-09T13:18:31.343448636Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:18:31.369486 env[1471]: time="2024-02-09T13:18:31.369454396Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:31.369612 kubelet[1884]: E0209 13:18:31.369597 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:18:31.369659 kubelet[1884]: E0209 13:18:31.369632 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:18:31.369693 kubelet[1884]: E0209 13:18:31.369666 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:31.369744 kubelet[1884]: E0209 13:18:31.369693 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:18:31.667967 kubelet[1884]: E0209 13:18:31.667747 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:32.342569 env[1471]: time="2024-02-09T13:18:32.342455506Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:18:32.368420 env[1471]: time="2024-02-09T13:18:32.368363688Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:32.368539 kubelet[1884]: E0209 13:18:32.368529 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:18:32.368588 kubelet[1884]: E0209 13:18:32.368562 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:18:32.368624 kubelet[1884]: E0209 13:18:32.368596 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:32.368624 kubelet[1884]: E0209 13:18:32.368623 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:18:32.668834 kubelet[1884]: E0209 13:18:32.668618 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:33.669803 kubelet[1884]: E0209 13:18:33.669703 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:34.670393 kubelet[1884]: E0209 13:18:34.670275 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:35.671615 kubelet[1884]: E0209 13:18:35.671507 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:36.671873 kubelet[1884]: E0209 13:18:36.671764 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:37.574369 kubelet[1884]: E0209 13:18:37.574260 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
13:18:37.672569 kubelet[1884]: E0209 13:18:37.672436 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:38.673170 kubelet[1884]: E0209 13:18:38.673061 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:39.674005 kubelet[1884]: E0209 13:18:39.673892 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:40.674485 kubelet[1884]: E0209 13:18:40.674380 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:41.342929 env[1471]: time="2024-02-09T13:18:41.342784993Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:18:41.368869 env[1471]: time="2024-02-09T13:18:41.368831805Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:41.369095 kubelet[1884]: E0209 13:18:41.369051 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:18:41.369095 kubelet[1884]: E0209 13:18:41.369074 1884 kuberuntime_manager.go:965] 
"Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:18:41.369095 kubelet[1884]: E0209 13:18:41.369095 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:41.369204 kubelet[1884]: E0209 13:18:41.369113 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:18:41.675526 kubelet[1884]: E0209 13:18:41.675311 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:42.342650 env[1471]: time="2024-02-09T13:18:42.342525320Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:18:42.356927 env[1471]: time="2024-02-09T13:18:42.356889461Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:42.357150 kubelet[1884]: E0209 13:18:42.357066 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:18:42.357150 kubelet[1884]: E0209 13:18:42.357093 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:18:42.357150 kubelet[1884]: E0209 13:18:42.357118 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:42.357150 kubelet[1884]: E0209 13:18:42.357138 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:18:42.676474 kubelet[1884]: E0209 13:18:42.676260 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:43.677142 kubelet[1884]: E0209 13:18:43.677035 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:44.677611 kubelet[1884]: E0209 13:18:44.677502 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:45.343572 env[1471]: time="2024-02-09T13:18:45.343462491Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:18:45.343572 env[1471]: time="2024-02-09T13:18:45.343475514Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:18:45.369820 env[1471]: time="2024-02-09T13:18:45.369762925Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:45.369820 env[1471]: time="2024-02-09T13:18:45.369790034Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 
13:18:45.370071 kubelet[1884]: E0209 13:18:45.370014 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:18:45.370071 kubelet[1884]: E0209 13:18:45.370053 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:18:45.370165 kubelet[1884]: E0209 13:18:45.370076 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:45.370165 kubelet[1884]: E0209 13:18:45.370094 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:18:45.370165 
kubelet[1884]: E0209 13:18:45.370013 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:18:45.370165 kubelet[1884]: E0209 13:18:45.370129 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:18:45.370289 kubelet[1884]: E0209 13:18:45.370163 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:45.370289 kubelet[1884]: E0209 13:18:45.370176 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:18:45.678356 kubelet[1884]: E0209 13:18:45.678130 1884 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:46.678435 kubelet[1884]: E0209 13:18:46.678327 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:47.678612 kubelet[1884]: E0209 13:18:47.678507 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:48.679609 kubelet[1884]: E0209 13:18:48.679454 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:49.679727 kubelet[1884]: E0209 13:18:49.679609 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:50.680842 kubelet[1884]: E0209 13:18:50.680731 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:51.681345 kubelet[1884]: E0209 13:18:51.681177 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:52.682479 kubelet[1884]: E0209 13:18:52.682407 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:53.683487 kubelet[1884]: E0209 13:18:53.683379 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:54.343383 env[1471]: time="2024-02-09T13:18:54.343289062Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:18:54.369077 env[1471]: time="2024-02-09T13:18:54.369017023Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox 
\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:54.369235 kubelet[1884]: E0209 13:18:54.369182 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:18:54.369235 kubelet[1884]: E0209 13:18:54.369206 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:18:54.369235 kubelet[1884]: E0209 13:18:54.369229 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:54.369329 kubelet[1884]: E0209 13:18:54.369245 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:18:54.684472 kubelet[1884]: E0209 13:18:54.684255 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:55.685375 kubelet[1884]: E0209 13:18:55.685267 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:56.342772 env[1471]: time="2024-02-09T13:18:56.342652985Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:18:56.369029 env[1471]: time="2024-02-09T13:18:56.368994253Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:56.369204 kubelet[1884]: E0209 13:18:56.369194 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:18:56.369241 kubelet[1884]: E0209 13:18:56.369219 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 
13:18:56.369241 kubelet[1884]: E0209 13:18:56.369241 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:56.369300 kubelet[1884]: E0209 13:18:56.369257 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:18:56.686448 kubelet[1884]: E0209 13:18:56.686219 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:57.574382 kubelet[1884]: E0209 13:18:57.574279 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:57.687520 kubelet[1884]: E0209 13:18:57.687411 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:58.342986 env[1471]: time="2024-02-09T13:18:58.342856156Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:18:58.358432 env[1471]: time="2024-02-09T13:18:58.358397030Z" level=error 
msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:18:58.358607 kubelet[1884]: E0209 13:18:58.358570 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:18:58.358607 kubelet[1884]: E0209 13:18:58.358597 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:18:58.358689 kubelet[1884]: E0209 13:18:58.358622 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:18:58.358689 kubelet[1884]: E0209 13:18:58.358643 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:18:58.688089 kubelet[1884]: E0209 13:18:58.687972 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:18:59.688319 kubelet[1884]: E0209 13:18:59.688212 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:00.343530 env[1471]: time="2024-02-09T13:19:00.343472306Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:19:00.356180 env[1471]: time="2024-02-09T13:19:00.356118328Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:00.356277 kubelet[1884]: E0209 13:19:00.356262 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:19:00.356313 kubelet[1884]: E0209 13:19:00.356285 1884 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:19:00.356313 kubelet[1884]: E0209 13:19:00.356311 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:00.356377 kubelet[1884]: E0209 13:19:00.356330 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:19:00.689145 kubelet[1884]: E0209 13:19:00.688928 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:01.689251 kubelet[1884]: E0209 13:19:01.689129 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:02.690087 kubelet[1884]: E0209 13:19:02.689985 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:03.690839 kubelet[1884]: E0209 13:19:03.690732 1884 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:04.691298 kubelet[1884]: E0209 13:19:04.691188 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:05.691812 kubelet[1884]: E0209 13:19:05.691709 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:06.692081 kubelet[1884]: E0209 13:19:06.691977 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:07.343468 env[1471]: time="2024-02-09T13:19:07.343327675Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:19:07.370145 env[1471]: time="2024-02-09T13:19:07.370107895Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:07.370277 kubelet[1884]: E0209 13:19:07.370239 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:07.370277 kubelet[1884]: E0209 13:19:07.370267 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:19:07.370345 kubelet[1884]: E0209 13:19:07.370289 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:07.370345 kubelet[1884]: E0209 13:19:07.370306 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:19:07.692946 kubelet[1884]: E0209 13:19:07.692722 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:08.693378 kubelet[1884]: E0209 13:19:08.693266 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:09.343059 env[1471]: time="2024-02-09T13:19:09.342927958Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:19:09.343059 env[1471]: time="2024-02-09T13:19:09.342969719Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 
13:19:09.369446 env[1471]: time="2024-02-09T13:19:09.369408952Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:09.369591 kubelet[1884]: E0209 13:19:09.369567 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:19:09.369653 kubelet[1884]: E0209 13:19:09.369608 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:19:09.369653 kubelet[1884]: E0209 13:19:09.369649 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:09.369734 env[1471]: time="2024-02-09T13:19:09.369567204Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy 
network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:09.369773 kubelet[1884]: E0209 13:19:09.369667 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:19:09.369773 kubelet[1884]: E0209 13:19:09.369681 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:09.369773 kubelet[1884]: E0209 13:19:09.369709 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:19:09.369773 kubelet[1884]: E0209 13:19:09.369726 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:09.369873 kubelet[1884]: E0209 13:19:09.369739 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:19:09.693709 kubelet[1884]: E0209 13:19:09.693497 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:10.694374 kubelet[1884]: E0209 13:19:10.694263 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:11.343078 env[1471]: time="2024-02-09T13:19:11.342984252Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:19:11.369897 env[1471]: time="2024-02-09T13:19:11.369864136Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:11.370075 kubelet[1884]: E0209 13:19:11.370066 1884 
remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:19:11.370110 kubelet[1884]: E0209 13:19:11.370091 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:19:11.370136 kubelet[1884]: E0209 13:19:11.370112 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:11.370136 kubelet[1884]: E0209 13:19:11.370129 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:19:11.694890 kubelet[1884]: E0209 13:19:11.694665 1884 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:12.695758 kubelet[1884]: E0209 13:19:12.695651 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:13.696930 kubelet[1884]: E0209 13:19:13.696820 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:14.697370 kubelet[1884]: E0209 13:19:14.697251 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:15.697613 kubelet[1884]: E0209 13:19:15.697506 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:16.698302 kubelet[1884]: E0209 13:19:16.698172 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:17.573982 kubelet[1884]: E0209 13:19:17.573873 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:17.698937 kubelet[1884]: E0209 13:19:17.698829 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:18.699292 kubelet[1884]: E0209 13:19:18.699177 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:19.699900 kubelet[1884]: E0209 13:19:19.699788 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:20.342858 env[1471]: time="2024-02-09T13:19:20.342733886Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:19:20.369254 env[1471]: time="2024-02-09T13:19:20.369193726Z" level=error msg="StopPodSandbox for 
\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:20.369398 kubelet[1884]: E0209 13:19:20.369358 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:20.369398 kubelet[1884]: E0209 13:19:20.369386 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:19:20.369455 kubelet[1884]: E0209 13:19:20.369409 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:20.369455 kubelet[1884]: E0209 13:19:20.369426 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:19:20.700166 kubelet[1884]: E0209 13:19:20.699945 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:21.700285 kubelet[1884]: E0209 13:19:21.700170 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:22.354588 env[1471]: time="2024-02-09T13:19:22.354444885Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:19:22.405436 env[1471]: time="2024-02-09T13:19:22.405348845Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:22.405665 kubelet[1884]: E0209 13:19:22.405602 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:19:22.405665 kubelet[1884]: E0209 13:19:22.405650 1884 kuberuntime_manager.go:965] "Failed 
to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:19:22.405809 kubelet[1884]: E0209 13:19:22.405700 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:22.405809 kubelet[1884]: E0209 13:19:22.405740 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:19:22.701506 kubelet[1884]: E0209 13:19:22.701278 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:23.702362 kubelet[1884]: E0209 13:19:23.702252 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:24.343766 env[1471]: time="2024-02-09T13:19:24.343644245Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:19:24.343766 env[1471]: time="2024-02-09T13:19:24.343691162Z" level=info msg="StopPodSandbox for 
\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:19:24.372079 env[1471]: time="2024-02-09T13:19:24.372022693Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:24.372266 env[1471]: time="2024-02-09T13:19:24.372022506Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:24.372309 kubelet[1884]: E0209 13:19:24.372272 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:19:24.372342 kubelet[1884]: E0209 13:19:24.372316 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:19:24.372342 kubelet[1884]: E0209 13:19:24.372337 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:24.372404 kubelet[1884]: E0209 13:19:24.372354 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:19:24.372404 kubelet[1884]: E0209 13:19:24.372272 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:24.372404 kubelet[1884]: E0209 13:19:24.372371 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:19:24.372404 kubelet[1884]: E0209 13:19:24.372390 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:24.372510 kubelet[1884]: E0209 13:19:24.372404 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:19:24.702716 kubelet[1884]: E0209 13:19:24.702455 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:25.702993 kubelet[1884]: E0209 13:19:25.702882 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:26.703782 kubelet[1884]: E0209 13:19:26.703677 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:27.704469 kubelet[1884]: E0209 13:19:27.704372 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:28.705319 kubelet[1884]: E0209 13:19:28.705213 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:29.705578 kubelet[1884]: E0209 13:19:29.705444 1884 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:30.706349 kubelet[1884]: E0209 13:19:30.706239 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:31.707518 kubelet[1884]: E0209 13:19:31.707416 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:32.707741 kubelet[1884]: E0209 13:19:32.707638 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:33.343456 env[1471]: time="2024-02-09T13:19:33.343327871Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:19:33.372689 env[1471]: time="2024-02-09T13:19:33.372557501Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:33.372839 kubelet[1884]: E0209 13:19:33.372778 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:33.372839 kubelet[1884]: E0209 13:19:33.372837 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} 
Feb 9 13:19:33.372914 kubelet[1884]: E0209 13:19:33.372860 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:33.372914 kubelet[1884]: E0209 13:19:33.372891 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:19:33.708430 kubelet[1884]: E0209 13:19:33.708212 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:34.709382 kubelet[1884]: E0209 13:19:34.709265 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:35.343046 env[1471]: time="2024-02-09T13:19:35.342913190Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:19:35.369268 env[1471]: time="2024-02-09T13:19:35.369209031Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox 
\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:35.369427 kubelet[1884]: E0209 13:19:35.369382 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:35.369427 kubelet[1884]: E0209 13:19:35.369407 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:19:35.369487 kubelet[1884]: E0209 13:19:35.369436 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:35.369487 kubelet[1884]: E0209 13:19:35.369454 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:19:35.710735 kubelet[1884]: E0209 13:19:35.710499 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:35.758780 kubelet[1884]: I0209 13:19:35.758676 1884 topology_manager.go:210] "Topology Admit Handler" Feb 9 13:19:35.782051 systemd[1]: Created slice kubepods-besteffort-pod1260a1b8_082b_4e29_998b_0de8b311e19f.slice. Feb 9 13:19:35.848836 kubelet[1884]: I0209 13:19:35.848762 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9zj\" (UniqueName: \"kubernetes.io/projected/1260a1b8-082b-4e29-998b-0de8b311e19f-kube-api-access-vz9zj\") pod \"nginx-deployment-8ffc5cf85-bhft8\" (UID: \"1260a1b8-082b-4e29-998b-0de8b311e19f\") " pod="default/nginx-deployment-8ffc5cf85-bhft8" Feb 9 13:19:36.086147 env[1471]: time="2024-02-09T13:19:36.086011369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bhft8,Uid:1260a1b8-082b-4e29-998b-0de8b311e19f,Namespace:default,Attempt:0,}" Feb 9 13:19:36.128282 env[1471]: time="2024-02-09T13:19:36.128206914Z" level=error msg="Failed to destroy network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.128435 env[1471]: time="2024-02-09T13:19:36.128417717Z" level=error msg="encountered an error cleaning up failed sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.128489 env[1471]: time="2024-02-09T13:19:36.128450387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bhft8,Uid:1260a1b8-082b-4e29-998b-0de8b311e19f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.128616 kubelet[1884]: E0209 13:19:36.128600 1884 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.128675 kubelet[1884]: E0209 13:19:36.128638 1884 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-bhft8" Feb 9 13:19:36.128675 kubelet[1884]: E0209 13:19:36.128653 1884 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-bhft8" Feb 9 13:19:36.128759 kubelet[1884]: E0209 13:19:36.128687 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-bhft8_default(1260a1b8-082b-4e29-998b-0de8b311e19f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-bhft8_default(1260a1b8-082b-4e29-998b-0de8b311e19f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-bhft8" podUID=1260a1b8-082b-4e29-998b-0de8b311e19f Feb 9 13:19:36.129267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543-shm.mount: Deactivated successfully. 
Feb 9 13:19:36.354911 env[1471]: time="2024-02-09T13:19:36.354696127Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:19:36.405856 env[1471]: time="2024-02-09T13:19:36.405763195Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.406076 kubelet[1884]: E0209 13:19:36.406023 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:19:36.406076 kubelet[1884]: E0209 13:19:36.406064 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:19:36.406225 kubelet[1884]: E0209 13:19:36.406112 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:36.406225 
kubelet[1884]: E0209 13:19:36.406146 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:19:36.711130 kubelet[1884]: E0209 13:19:36.710901 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:36.844345 kubelet[1884]: I0209 13:19:36.844253 1884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:19:36.845293 env[1471]: time="2024-02-09T13:19:36.845181523Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:19:36.874364 env[1471]: time="2024-02-09T13:19:36.874304765Z" level=error msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" failed" error="failed to destroy network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:36.874491 kubelet[1884]: E0209 13:19:36.874482 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:19:36.874523 kubelet[1884]: E0209 13:19:36.874507 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543} Feb 9 13:19:36.874544 kubelet[1884]: E0209 13:19:36.874529 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1260a1b8-082b-4e29-998b-0de8b311e19f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:36.874616 kubelet[1884]: E0209 13:19:36.874549 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1260a1b8-082b-4e29-998b-0de8b311e19f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-bhft8" podUID=1260a1b8-082b-4e29-998b-0de8b311e19f Feb 9 13:19:37.343310 env[1471]: time="2024-02-09T13:19:37.343182996Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:19:37.372410 env[1471]: time="2024-02-09T13:19:37.372376111Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" 
error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:37.372671 kubelet[1884]: E0209 13:19:37.372555 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:19:37.372671 kubelet[1884]: E0209 13:19:37.372610 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:19:37.372671 kubelet[1884]: E0209 13:19:37.372648 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:37.372671 kubelet[1884]: E0209 13:19:37.372665 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:19:37.574171 kubelet[1884]: E0209 13:19:37.574067 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:37.711579 kubelet[1884]: E0209 13:19:37.711334 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:38.712676 kubelet[1884]: E0209 13:19:38.712572 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:39.713516 kubelet[1884]: E0209 13:19:39.713411 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:40.714708 kubelet[1884]: E0209 13:19:40.714594 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:41.715135 kubelet[1884]: E0209 13:19:41.715017 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:42.715377 kubelet[1884]: E0209 13:19:42.715256 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:43.716608 kubelet[1884]: E0209 13:19:43.716451 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:44.716886 kubelet[1884]: E0209 13:19:44.716777 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:45.718070 kubelet[1884]: E0209 13:19:45.717967 1884 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:46.343405 env[1471]: time="2024-02-09T13:19:46.343298028Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:19:46.370155 env[1471]: time="2024-02-09T13:19:46.370120985Z" level=error msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" failed" error="failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:46.370328 kubelet[1884]: E0209 13:19:46.370289 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:46.370328 kubelet[1884]: E0209 13:19:46.370313 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e} Feb 9 13:19:46.370384 kubelet[1884]: E0209 13:19:46.370337 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Feb 9 13:19:46.370384 kubelet[1884]: E0209 13:19:46.370354 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7e5efc5-201d-49b9-967f-26a58631682a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68c77fd6bd-t8ckd" podUID=d7e5efc5-201d-49b9-967f-26a58631682a Feb 9 13:19:46.719164 kubelet[1884]: E0209 13:19:46.718931 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:47.719941 kubelet[1884]: E0209 13:19:47.719836 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:48.343254 env[1471]: time="2024-02-09T13:19:48.343134974Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:19:48.344355 env[1471]: time="2024-02-09T13:19:48.343918294Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:19:48.344355 env[1471]: time="2024-02-09T13:19:48.343971980Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:19:48.370392 env[1471]: time="2024-02-09T13:19:48.370327824Z" level=error msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" failed" error="failed to destroy network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:48.370392 env[1471]: time="2024-02-09T13:19:48.370367503Z" level=error msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" failed" error="failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:48.370532 env[1471]: time="2024-02-09T13:19:48.370498781Z" level=error msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" failed" error="failed to destroy network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:48.370579 kubelet[1884]: E0209 13:19:48.370479 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:19:48.370579 kubelet[1884]: E0209 13:19:48.370509 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543} Feb 9 13:19:48.370579 kubelet[1884]: E0209 13:19:48.370531 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1260a1b8-082b-4e29-998b-0de8b311e19f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:48.370579 kubelet[1884]: E0209 13:19:48.370481 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:48.370730 kubelet[1884]: E0209 13:19:48.370575 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1260a1b8-082b-4e29-998b-0de8b311e19f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-bhft8" podUID=1260a1b8-082b-4e29-998b-0de8b311e19f Feb 9 13:19:48.370730 kubelet[1884]: E0209 13:19:48.370584 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d} Feb 9 13:19:48.370730 kubelet[1884]: E0209 13:19:48.370584 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:19:48.370730 kubelet[1884]: E0209 13:19:48.370599 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce} Feb 9 13:19:48.370730 kubelet[1884]: E0209 13:19:48.370612 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:48.370844 kubelet[1884]: E0209 13:19:48.370615 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:48.370844 kubelet[1884]: E0209 13:19:48.370642 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40a62a42-1c08-4513-9fc4-544d64d73811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-q5v9s" podUID=40a62a42-1c08-4513-9fc4-544d64d73811 Feb 9 13:19:48.370844 kubelet[1884]: E0209 13:19:48.370651 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f51fc53-a7af-4e05-9116-86df85873e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-72bhh" podUID=4f51fc53-a7af-4e05-9116-86df85873e6c Feb 9 13:19:48.720325 kubelet[1884]: E0209 13:19:48.720095 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:49.343194 env[1471]: time="2024-02-09T13:19:49.343072268Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:19:49.368986 env[1471]: time="2024-02-09T13:19:49.368924978Z" level=error msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" failed" error="failed to destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 13:19:49.369228 kubelet[1884]: E0209 13:19:49.369072 1884 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:19:49.369228 kubelet[1884]: E0209 13:19:49.369125 1884 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a} Feb 9 13:19:49.369228 kubelet[1884]: E0209 13:19:49.369145 1884 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 13:19:49.369228 kubelet[1884]: E0209 13:19:49.369162 1884 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e7f3db-c090-45e4-97c1-38a20de9b400\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fz782" podUID=c4e7f3db-c090-45e4-97c1-38a20de9b400 Feb 9 13:19:49.720760 kubelet[1884]: E0209 13:19:49.720521 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:50.720805 kubelet[1884]: E0209 
13:19:50.720690 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:51.720979 kubelet[1884]: E0209 13:19:51.720934 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:52.721694 kubelet[1884]: E0209 13:19:52.721643 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:52.951192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142322889.mount: Deactivated successfully. Feb 9 13:19:52.973894 env[1471]: time="2024-02-09T13:19:52.973811542Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:19:52.974523 env[1471]: time="2024-02-09T13:19:52.974484364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:19:52.975095 env[1471]: time="2024-02-09T13:19:52.975053335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:19:52.976026 env[1471]: time="2024-02-09T13:19:52.975987605Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:19:52.976166 env[1471]: time="2024-02-09T13:19:52.976123657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 13:19:52.979988 env[1471]: time="2024-02-09T13:19:52.979949804Z" 
level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 13:19:52.986033 env[1471]: time="2024-02-09T13:19:52.985987186Z" level=info msg="CreateContainer within sandbox \"c3e5489030d3c99d7ebaec5c2f7a18712893215f6b1170900b29442b6ef33684\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc7e3aadb932ac8c6e1e21f9e6b6315edfa31dec88d01d67313719c43f327b8d\"" Feb 9 13:19:52.986281 env[1471]: time="2024-02-09T13:19:52.986232841Z" level=info msg="StartContainer for \"bc7e3aadb932ac8c6e1e21f9e6b6315edfa31dec88d01d67313719c43f327b8d\"" Feb 9 13:19:53.007261 systemd[1]: Started cri-containerd-bc7e3aadb932ac8c6e1e21f9e6b6315edfa31dec88d01d67313719c43f327b8d.scope. Feb 9 13:19:53.015000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.044131 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 9 13:19:53.044170 kernel: audit: type=1400 audit(1707484793.015:666): avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.015000 audit[4257]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2015 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:53.204922 kernel: audit: type=1300 audit(1707484793.015:666): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=2015 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:53.205005 kernel: audit: type=1327 audit(1707484793.015:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376533616164623933326163386336653165323166396536623633 Feb 9 13:19:53.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376533616164623933326163386336653165323166396536623633 Feb 9 13:19:53.298720 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.362406 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.426193 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.490030 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.555341 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.569881 env[1471]: time="2024-02-09T13:19:53.569859698Z" level=info msg="StartContainer for \"bc7e3aadb932ac8c6e1e21f9e6b6315edfa31dec88d01d67313719c43f327b8d\" returns successfully" Feb 9 13:19:53.620593 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.684610 kernel: audit: type=1400 audit(1707484793.016:667): avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { perfmon } for 
pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.722085 kubelet[1884]: E0209 13:19:53.722037 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.016000 audit: BPF prog-id=78 op=LOAD Feb 9 13:19:53.016000 audit[4257]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0002efc70 items=0 ppid=2015 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:53.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376533616164623933326163386336653165323166396536623633 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.106000 audit: BPF prog-id=79 op=LOAD Feb 9 13:19:53.106000 audit[4257]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0002efcb8 items=0 ppid=2015 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
13:19:53.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376533616164623933326163386336653165323166396536623633 Feb 9 13:19:53.298000 audit: BPF prog-id=79 op=UNLOAD Feb 9 13:19:53.298000 audit: BPF prog-id=78 op=UNLOAD Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { perfmon } for pid=4257 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit[4257]: AVC avc: denied { bpf } for pid=4257 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:53.298000 audit: BPF prog-id=80 op=LOAD Feb 9 13:19:53.298000 audit[4257]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0002efd48 items=0 ppid=2015 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:53.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263376533616164623933326163386336653165323166396536623633 Feb 9 13:19:53.831794 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 13:19:53.831828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 9 13:19:53.922612 kubelet[1884]: I0209 13:19:53.922537 1884 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-z64hk" podStartSLOduration=-9.22337183293231e+09 pod.CreationTimestamp="2024-02-09 13:16:30 +0000 UTC" firstStartedPulling="2024-02-09 13:16:37.911537604 +0000 UTC m=+20.571629364" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 13:19:53.921628282 +0000 UTC m=+216.581720111" watchObservedRunningTime="2024-02-09 13:19:53.92246523 +0000 UTC m=+216.582557029" Feb 9 13:19:54.722979 kubelet[1884]: E0209 13:19:54.722865 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:55.244000 audit[4426]: AVC avc: denied { write } for pid=4426 comm="tee" name="fd" dev="proc" ino=12967 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4424]: AVC avc: denied { write } for pid=4424 comm="tee" name="fd" dev="proc" ino=23917 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4428]: AVC avc: denied { write } for pid=4428 comm="tee" name="fd" dev="proc" ino=33955 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4424]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca916e97f a2=241 a3=1b6 items=1 ppid=4392 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit[4426]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd374fd96f a2=241 a3=1b6 items=1 ppid=4391 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit[4428]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2465e981 a2=241 a3=1b6 items=1 ppid=4394 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=33952 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=32968 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=16323 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit[4434]: AVC avc: denied { write } for pid=4434 comm="tee" name="fd" dev="proc" ino=32074 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4434]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffa5fdf980 a2=241 a3=1b6 items=1 ppid=4397 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=32071 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit[4436]: AVC avc: denied { write } for pid=4436 comm="tee" name="fd" dev="proc" ino=8892 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4438]: AVC avc: denied { write } for pid=4438 comm="tee" name="fd" dev="proc" ino=32971 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4436]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff5e79597f a2=241 a3=1b6 items=1 ppid=4396 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=8889 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit[4438]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd3d80970 a2=241 a3=1b6 items=1 ppid=4393 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=20976 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.244000 audit[4435]: AVC avc: denied { write } for pid=4435 comm="tee" name="fd" dev="proc" ino=16326 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 13:19:55.244000 audit[4435]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd63d6c97f a2=241 a3=1b6 items=1 ppid=4395 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.244000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 13:19:55.244000 audit: PATH item=0 name="/dev/fd/63" inode=18094 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 13:19:55.244000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 13:19:55.323558 kernel: Initializing XFRM netlink socket Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit: BPF prog-id=81 op=LOAD Feb 9 13:19:55.369000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdbeefb7d0 a2=70 a3=7f6a1f3b2000 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.369000 audit: BPF prog-id=81 op=UNLOAD Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.369000 audit: BPF prog-id=82 op=LOAD Feb 9 13:19:55.369000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdbeefb7d0 a2=70 a3=6e items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.369000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit: BPF prog-id=82 op=UNLOAD Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdbeefb780 a2=70 a3=7ffdbeefb7d0 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit: BPF prog-id=83 op=LOAD Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdbeefb760 a2=70 a3=7ffdbeefb7d0 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit: BPF prog-id=83 op=UNLOAD Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdbeefb840 a2=70 a3=0 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffdbeefb830 a2=70 a3=0 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffdbeefb870 a2=70 a3=0 items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: 
AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { perfmon } for pid=4572 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit[4572]: AVC avc: denied { bpf } for pid=4572 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.370000 audit: BPF prog-id=84 op=LOAD Feb 9 13:19:55.370000 audit[4572]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdbeefb790 a2=70 a3=ffffffff items=0 ppid=4404 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.370000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 13:19:55.371000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.371000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff3403f120 a2=70 a3=fff80800 items=0 ppid=4404 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 13:19:55.371000 audit[4576]: AVC avc: denied { bpf } for pid=4576 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:55.371000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff3403eff0 a2=70 a3=3 items=0 ppid=4404 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 13:19:55.386000 audit: BPF prog-id=84 op=UNLOAD Feb 9 13:19:55.410000 audit[4630]: NETFILTER_CFG table=mangle:79 family=2 entries=19 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 
13:19:55.410000 audit[4630]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffcb4437350 a2=0 a3=7ffcb443733c items=0 ppid=4404 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.410000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:19:55.411000 audit[4629]: NETFILTER_CFG table=raw:80 family=2 entries=19 op=nft_register_chain pid=4629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:55.411000 audit[4629]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffebc302c10 a2=0 a3=5577f9056000 items=0 ppid=4404 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.411000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:19:55.413000 audit[4631]: NETFILTER_CFG table=nat:81 family=2 entries=16 op=nft_register_chain pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:55.413000 audit[4631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffdad36afe0 a2=0 a3=562097ed1000 items=0 ppid=4404 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.413000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 
Feb 9 13:19:55.413000 audit[4633]: NETFILTER_CFG table=filter:82 family=2 entries=39 op=nft_register_chain pid=4633 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:55.413000 audit[4633]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7fff2d020470 a2=0 a3=55a11629e000 items=0 ppid=4404 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:55.413000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:19:55.723929 kubelet[1884]: E0209 13:19:55.723852 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:56.343852 systemd-networkd[1320]: vxlan.calico: Link UP Feb 9 13:19:56.343856 systemd-networkd[1320]: vxlan.calico: Gained carrier Feb 9 13:19:56.724385 kubelet[1884]: E0209 13:19:56.724198 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:57.574013 kubelet[1884]: E0209 13:19:57.573908 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:57.722871 systemd-networkd[1320]: vxlan.calico: Gained IPv6LL Feb 9 13:19:57.724641 kubelet[1884]: E0209 13:19:57.724570 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:58.725213 kubelet[1884]: E0209 13:19:58.725100 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:59.343157 env[1471]: time="2024-02-09T13:19:59.343001671Z" level=info msg="StopPodSandbox for 
\"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:19:59.343157 env[1471]: time="2024-02-09T13:19:59.343121525Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\"" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.399 [INFO][4676] k8s.go 578: Cleaning up netns ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.399 [INFO][4676] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" iface="eth0" netns="/var/run/netns/cni-414b805a-ee66-509f-4545-7fb0d1e092ab" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.400 [INFO][4676] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" iface="eth0" netns="/var/run/netns/cni-414b805a-ee66-509f-4545-7fb0d1e092ab" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.400 [INFO][4676] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" iface="eth0" netns="/var/run/netns/cni-414b805a-ee66-509f-4545-7fb0d1e092ab" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.400 [INFO][4676] k8s.go 585: Releasing IP address(es) ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.400 [INFO][4676] utils.go 188: Calico CNI releasing IP address ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.439 [INFO][4711] ipam_plugin.go 415: Releasing address using handleID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.439 [INFO][4711] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.439 [INFO][4711] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.450 [WARNING][4711] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.451 [INFO][4711] ipam_plugin.go 443: Releasing address using workloadID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.452 [INFO][4711] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:19:59.457282 env[1471]: 2024-02-09 13:19:59.455 [INFO][4676] k8s.go 591: Teardown processing complete. ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:19:59.458979 env[1471]: time="2024-02-09T13:19:59.457582441Z" level=info msg="TearDown network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" successfully" Feb 9 13:19:59.458979 env[1471]: time="2024-02-09T13:19:59.457659957Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" returns successfully" Feb 9 13:19:59.459225 env[1471]: time="2024-02-09T13:19:59.458972725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c77fd6bd-t8ckd,Uid:d7e5efc5-201d-49b9-967f-26a58631682a,Namespace:calico-system,Attempt:1,}" Feb 9 13:19:59.462899 systemd[1]: run-netns-cni\x2d414b805a\x2dee66\x2d509f\x2d4545\x2d7fb0d1e092ab.mount: Deactivated successfully. 
Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.399 [INFO][4677] k8s.go 578: Cleaning up netns ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.400 [INFO][4677] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" iface="eth0" netns="/var/run/netns/cni-d1526aa4-181a-bc34-6b07-729a1c3d3473" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.400 [INFO][4677] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" iface="eth0" netns="/var/run/netns/cni-d1526aa4-181a-bc34-6b07-729a1c3d3473" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.401 [INFO][4677] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" iface="eth0" netns="/var/run/netns/cni-d1526aa4-181a-bc34-6b07-729a1c3d3473" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.401 [INFO][4677] k8s.go 585: Releasing IP address(es) ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.401 [INFO][4677] utils.go 188: Calico CNI releasing IP address ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.439 [INFO][4712] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.439 [INFO][4712] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.452 [INFO][4712] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.462 [WARNING][4712] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.463 [INFO][4712] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.466 [INFO][4712] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:19:59.476066 env[1471]: 2024-02-09 13:19:59.471 [INFO][4677] k8s.go 591: Teardown processing complete. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Feb 9 13:19:59.477886 env[1471]: time="2024-02-09T13:19:59.476371635Z" level=info msg="TearDown network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" successfully" Feb 9 13:19:59.477886 env[1471]: time="2024-02-09T13:19:59.476456613Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" returns successfully" Feb 9 13:19:59.477886 env[1471]: time="2024-02-09T13:19:59.477755047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72bhh,Uid:4f51fc53-a7af-4e05-9116-86df85873e6c,Namespace:calico-system,Attempt:1,}" Feb 9 13:19:59.484956 systemd[1]: run-netns-cni\x2dd1526aa4\x2d181a\x2dbc34\x2d6b07\x2d729a1c3d3473.mount: Deactivated successfully. 
Feb 9 13:19:59.583581 systemd-networkd[1320]: caliae999a9e2a4: Link UP Feb 9 13:19:59.645625 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 13:19:59.645678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliae999a9e2a4: link becomes ready Feb 9 13:19:59.645680 systemd-networkd[1320]: caliae999a9e2a4: Gained carrier Feb 9 13:19:59.646230 systemd-networkd[1320]: calia53f9ae2838: Link UP Feb 9 13:19:59.646555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia53f9ae2838: link becomes ready Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.523 [INFO][4741] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0 calico-kube-controllers-68c77fd6bd- calico-system d7e5efc5-201d-49b9-967f-26a58631682a 1810 0 2024-02-09 13:10:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68c77fd6bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.67.80.7 calico-kube-controllers-68c77fd6bd-t8ckd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae999a9e2a4 [] []}} ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.523 [INFO][4741] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.546 [INFO][4788] ipam_plugin.go 228: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" HandleID="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.559 [INFO][4788] ipam_plugin.go 268: Auto assigning IP ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" HandleID="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5830), Attrs:map[string]string{"namespace":"calico-system", "node":"10.67.80.7", "pod":"calico-kube-controllers-68c77fd6bd-t8ckd", "timestamp":"2024-02-09 13:19:59.546876233 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.559 [INFO][4788] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.559 [INFO][4788] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.559 [INFO][4788] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.560 [INFO][4788] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.564 [INFO][4788] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.567 [INFO][4788] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.569 [INFO][4788] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.570 [INFO][4788] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.570 [INFO][4788] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.572 [INFO][4788] ipam.go 1682: Creating new handle: k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.575 [INFO][4788] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.578 [INFO][4788] ipam.go 1216: Successfully claimed IPs: [192.168.30.1/26] block=192.168.30.0/26 handle="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.578 [INFO][4788] ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.30.1/26] handle="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" host="10.67.80.7" Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.578 [INFO][4788] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:19:59.661121 env[1471]: 2024-02-09 13:19:59.578 [INFO][4788] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.1/26] IPv6=[] ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" HandleID="k8s-pod-network.79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.580 [INFO][4741] k8s.go 385: Populated endpoint ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0", GenerateName:"calico-kube-controllers-68c77fd6bd-", Namespace:"calico-system", SelfLink:"", UID:"d7e5efc5-201d-49b9-967f-26a58631682a", ResourceVersion:"1810", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c77fd6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"calico-kube-controllers-68c77fd6bd-t8ckd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae999a9e2a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.580 [INFO][4741] k8s.go 386: Calico CNI using IPs: [192.168.30.1/32] ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.580 [INFO][4741] dataplane_linux.go 68: Setting the host side veth name to caliae999a9e2a4 ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.645 [INFO][4741] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.646 [INFO][4741] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0", GenerateName:"calico-kube-controllers-68c77fd6bd-", Namespace:"calico-system", SelfLink:"", UID:"d7e5efc5-201d-49b9-967f-26a58631682a", ResourceVersion:"1810", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c77fd6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b", Pod:"calico-kube-controllers-68c77fd6bd-t8ckd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae999a9e2a4", MAC:"fe:fd:29:08:77:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:19:59.661592 env[1471]: 2024-02-09 13:19:59.660 [INFO][4741] k8s.go 491: Wrote updated endpoint to datastore ContainerID="79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b" Namespace="calico-system" Pod="calico-kube-controllers-68c77fd6bd-t8ckd" WorkloadEndpoint="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:19:59.666938 env[1471]: time="2024-02-09T13:19:59.666904321Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:19:59.666938 env[1471]: time="2024-02-09T13:19:59.666927871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:19:59.666938 env[1471]: time="2024-02-09T13:19:59.666937799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:19:59.667090 env[1471]: time="2024-02-09T13:19:59.667011874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b pid=4847 runtime=io.containerd.runc.v2 Feb 9 13:19:59.668000 audit[4858]: NETFILTER_CFG table=filter:83 family=2 entries=36 op=nft_register_chain pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:59.673993 kernel: kauditd_printk_skb: 151 callbacks suppressed Feb 9 13:19:59.674036 kernel: audit: type=1325 audit(1707484799.668:697): table=filter:83 family=2 entries=36 op=nft_register_chain pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:59.674221 systemd-networkd[1320]: calia53f9ae2838: Gained carrier Feb 9 13:19:59.689553 systemd[1]: Started cri-containerd-79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b.scope. 
Feb 9 13:19:59.725548 kubelet[1884]: E0209 13:19:59.725504 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:19:59.668000 audit[4858]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdab5028f0 a2=0 a3=7ffdab5028dc items=0 ppid=4404 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.756629 kernel: audit: type=1300 audit(1707484799.668:697): arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdab5028f0 a2=0 a3=7ffdab5028dc items=0 ppid=4404 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.668000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:19:59.909386 kernel: audit: type=1327 audit(1707484799.668:697): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:19:59.909416 kernel: audit: type=1400 audit(1707484799.762:698): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970325 kernel: audit: type=1400 audit(1707484799.762:699): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.527 [INFO][4751] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-csi--node--driver--72bhh-eth0 csi-node-driver- calico-system 4f51fc53-a7af-4e05-9116-86df85873e6c 1809 0 2024-02-09 13:16:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.80.7 csi-node-driver-72bhh eth0 default [] [] [kns.calico-system ksa.calico-system.default] calia53f9ae2838 [] []}} ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.527 [INFO][4751] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.551 [INFO][4793] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" HandleID="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.560 [INFO][4793] ipam_plugin.go 268: Auto assigning IP 
ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" HandleID="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.67.80.7", "pod":"csi-node-driver-72bhh", "timestamp":"2024-02-09 13:19:59.551169561 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.560 [INFO][4793] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.578 [INFO][4793] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.578 [INFO][4793] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.580 [INFO][4793] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.585 [INFO][4793] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.589 [INFO][4793] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.592 [INFO][4793] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.595 [INFO][4793] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.595 [INFO][4793] ipam.go 1180: Attempting 
to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.596 [INFO][4793] ipam.go 1682: Creating new handle: k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5 Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.600 [INFO][4793] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.605 [INFO][4793] ipam.go 1216: Successfully claimed IPs: [192.168.30.2/26] block=192.168.30.0/26 handle="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.605 [INFO][4793] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.2/26] handle="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" host="10.67.80.7" Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.605 [INFO][4793] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:19:59.984086 env[1471]: 2024-02-09 13:19:59.605 [INFO][4793] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.2/26] IPv6=[] ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" HandleID="k8s-pod-network.5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.607 [INFO][4751] k8s.go 385: Populated endpoint ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-csi--node--driver--72bhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f51fc53-a7af-4e05-9116-86df85873e6c", ResourceVersion:"1809", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"csi-node-driver-72bhh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calia53f9ae2838", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.608 [INFO][4751] k8s.go 386: Calico CNI using IPs: [192.168.30.2/32] ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.608 [INFO][4751] dataplane_linux.go 68: Setting the host side veth name to calia53f9ae2838 ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.674 [INFO][4751] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.972 [INFO][4751] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-csi--node--driver--72bhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f51fc53-a7af-4e05-9116-86df85873e6c", ResourceVersion:"1809", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 16, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5", Pod:"csi-node-driver-72bhh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia53f9ae2838", MAC:"ae:e7:85:b4:e3:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:19:59.984534 env[1471]: 2024-02-09 13:19:59.982 [INFO][4751] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5" Namespace="calico-system" Pod="csi-node-driver-72bhh" WorkloadEndpoint="10.67.80.7-k8s-csi--node--driver--72bhh-eth0" Feb 9 13:19:59.989777 env[1471]: time="2024-02-09T13:19:59.989744088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:19:59.989777 env[1471]: time="2024-02-09T13:19:59.989763534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:19:59.989777 env[1471]: time="2024-02-09T13:19:59.989773345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:19:59.989896 env[1471]: time="2024-02-09T13:19:59.989837738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5 pid=4897 runtime=io.containerd.runc.v2 Feb 9 13:20:00.006770 systemd[1]: Started cri-containerd-5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5.scope. Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.092347 kernel: audit: type=1400 audit(1707484799.762:700): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.092401 kernel: audit: type=1400 audit(1707484799.762:701): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.153613 kernel: audit: type=1400 audit(1707484799.762:702): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214893 kernel: audit: type=1400 audit(1707484799.762:703): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214928 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.762000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.969000 audit: BPF prog-id=85 op=LOAD Feb 9 13:19:59.969000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.969000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=4847 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739613138663938656238373330663735353066373665363839613462 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=4847 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739613138663938656238373330663735353066373665363839613462 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: 
AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.989000 audit[4898]: NETFILTER_CFG table=filter:84 family=2 entries=34 op=nft_register_chain pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:19:59.989000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=18320 a0=3 a1=7fff392e4630 a2=0 a3=7fff392e461c items=0 ppid=4404 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.989000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:19:59.970000 audit: BPF prog-id=86 op=LOAD Feb 9 13:19:59.970000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 
a3=c00029aa90 items=0 ppid=4847 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:19:59.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739613138663938656238373330663735353066373665363839613462 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.091000 audit: BPF prog-id=87 op=LOAD Feb 9 13:20:00.214000 audit: BPF prog-id=88 op=LOAD Feb 9 13:20:00.091000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0003962a8 items=0 ppid=4847 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739613138663938656238373330663735353066373665363839613462 Feb 9 13:20:00.214000 audit: BPF prog-id=87 op=UNLOAD Feb 9 13:20:00.214000 audit: BPF prog-id=86 op=UNLOAD Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { bpf } 
for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { perfmon } for pid=4857 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit: BPF prog-id=89 op=LOAD Feb 9 13:20:00.214000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001179d8 a2=78 a3=c000307150 items=0 ppid=4897 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.214000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383338396530376636646233313264326334663664303661653932 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: 
denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit: BPF prog-id=90 op=LOAD Feb 9 13:20:00.214000 audit[4857]: AVC avc: denied { bpf } for pid=4857 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.214000 audit: BPF prog-id=91 op=LOAD Feb 9 13:20:00.301000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000117770 a2=78 a3=c000307198 items=0 ppid=4897 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383338396530376636646233313264326334663664303661653932 Feb 9 13:20:00.214000 audit[4857]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0003966b8 items=0 ppid=4847 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739613138663938656238373330663735353066373665363839613462 Feb 9 13:20:00.301000 audit: BPF prog-id=90 op=UNLOAD Feb 9 13:20:00.301000 audit: BPF prog-id=89 op=UNLOAD Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { perfmon } for pid=4907 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.301000 audit[4907]: AVC avc: denied { bpf } for pid=4907 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 13:20:00.301000 audit: BPF prog-id=92 op=LOAD Feb 9 13:20:00.301000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000117c30 a2=78 a3=c0003075a8 items=0 ppid=4897 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562383338396530376636646233313264326334663664303661653932 Feb 9 13:20:00.319288 env[1471]: time="2024-02-09T13:20:00.319262245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-72bhh,Uid:4f51fc53-a7af-4e05-9116-86df85873e6c,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5\"" Feb 9 13:20:00.319903 env[1471]: time="2024-02-09T13:20:00.319889582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 13:20:00.331098 env[1471]: time="2024-02-09T13:20:00.331081617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68c77fd6bd-t8ckd,Uid:d7e5efc5-201d-49b9-967f-26a58631682a,Namespace:calico-system,Attempt:1,} returns sandbox id \"79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b\"" Feb 9 13:20:00.342330 env[1471]: time="2024-02-09T13:20:00.342287391Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.384 [INFO][4954] k8s.go 578: Cleaning up netns ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.384 [INFO][4954] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" iface="eth0" netns="/var/run/netns/cni-75f396fd-eefc-d4fb-7313-9481b7d01ad1" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.384 [INFO][4954] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" iface="eth0" netns="/var/run/netns/cni-75f396fd-eefc-d4fb-7313-9481b7d01ad1" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.385 [INFO][4954] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" iface="eth0" netns="/var/run/netns/cni-75f396fd-eefc-d4fb-7313-9481b7d01ad1" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.385 [INFO][4954] k8s.go 585: Releasing IP address(es) ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.385 [INFO][4954] utils.go 188: Calico CNI releasing IP address ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.397 [INFO][4965] ipam_plugin.go 415: Releasing address using handleID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.397 [INFO][4965] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.397 [INFO][4965] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.410 [WARNING][4965] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.410 [INFO][4965] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.413 [INFO][4965] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:20:00.418601 env[1471]: 2024-02-09 13:20:00.416 [INFO][4954] k8s.go 591: Teardown processing complete. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:00.420539 env[1471]: time="2024-02-09T13:20:00.418826912Z" level=info msg="TearDown network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" successfully" Feb 9 13:20:00.420539 env[1471]: time="2024-02-09T13:20:00.418898524Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" returns successfully" Feb 9 13:20:00.420539 env[1471]: time="2024-02-09T13:20:00.419954017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q5v9s,Uid:40a62a42-1c08-4513-9fc4-544d64d73811,Namespace:kube-system,Attempt:1,}" Feb 9 13:20:00.469939 systemd[1]: run-netns-cni\x2d75f396fd\x2deefc\x2dd4fb\x2d7313\x2d9481b7d01ad1.mount: Deactivated successfully. 
Feb 9 13:20:00.610272 systemd-networkd[1320]: cali8a4e189226b: Link UP Feb 9 13:20:00.674286 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 13:20:00.674323 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8a4e189226b: link becomes ready Feb 9 13:20:00.674464 systemd-networkd[1320]: cali8a4e189226b: Gained carrier Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.497 [INFO][4983] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0 coredns-787d4945fb- kube-system 40a62a42-1c08-4513-9fc4-544d64d73811 1821 0 2024-02-09 13:10:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.67.80.7 coredns-787d4945fb-q5v9s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a4e189226b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.497 [INFO][4983] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.551 [INFO][5006] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" HandleID="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.560 [INFO][5006] ipam_plugin.go 268: Auto assigning IP 
ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" HandleID="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b8560), Attrs:map[string]string{"namespace":"kube-system", "node":"10.67.80.7", "pod":"coredns-787d4945fb-q5v9s", "timestamp":"2024-02-09 13:20:00.551537512 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.560 [INFO][5006] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.561 [INFO][5006] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.561 [INFO][5006] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.562 [INFO][5006] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.566 [INFO][5006] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.574 [INFO][5006] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.578 [INFO][5006] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.583 [INFO][5006] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.583 [INFO][5006] ipam.go 1180: 
Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.586 [INFO][5006] ipam.go 1682: Creating new handle: k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22 Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.593 [INFO][5006] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.603 [INFO][5006] ipam.go 1216: Successfully claimed IPs: [192.168.30.3/26] block=192.168.30.0/26 handle="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.603 [INFO][5006] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.3/26] handle="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" host="10.67.80.7" Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.604 [INFO][5006] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:00.684576 env[1471]: 2024-02-09 13:20:00.604 [INFO][5006] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.3/26] IPv6=[] ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" HandleID="k8s-pod-network.d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.607 [INFO][4983] k8s.go 385: Populated endpoint ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"40a62a42-1c08-4513-9fc4-544d64d73811", ResourceVersion:"1821", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"coredns-787d4945fb-q5v9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4e189226b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.607 [INFO][4983] k8s.go 386: Calico CNI using IPs: [192.168.30.3/32] ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.607 [INFO][4983] dataplane_linux.go 68: Setting the host side veth name to cali8a4e189226b ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.674 [INFO][4983] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.674 [INFO][4983] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0", GenerateName:"coredns-787d4945fb-", 
Namespace:"kube-system", SelfLink:"", UID:"40a62a42-1c08-4513-9fc4-544d64d73811", ResourceVersion:"1821", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22", Pod:"coredns-787d4945fb-q5v9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4e189226b", MAC:"02:59:a6:46:eb:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:00.685353 env[1471]: 2024-02-09 13:20:00.683 [INFO][4983] k8s.go 491: Wrote updated endpoint to datastore ContainerID="d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22" Namespace="kube-system" Pod="coredns-787d4945fb-q5v9s" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:00.690630 env[1471]: time="2024-02-09T13:20:00.690561981Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:20:00.690630 env[1471]: time="2024-02-09T13:20:00.690586492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:20:00.690630 env[1471]: time="2024-02-09T13:20:00.690596996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:20:00.690765 env[1471]: time="2024-02-09T13:20:00.690737309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22 pid=5048 runtime=io.containerd.runc.v2 Feb 9 13:20:00.693000 audit[5065]: NETFILTER_CFG table=filter:85 family=2 entries=50 op=nft_register_chain pid=5065 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:20:00.693000 audit[5065]: SYSCALL arch=c000003e syscall=46 success=yes exit=25136 a0=3 a1=7fff0286a1d0 a2=0 a3=7fff0286a1bc items=0 ppid=4404 pid=5065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:20:00.697144 systemd[1]: Started cri-containerd-d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22.scope. 
Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.701000 audit: BPF prog-id=93 op=LOAD Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439386363383738623661616439383137363764623263363262633938 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439386363383738623661616439383137363764623263363262633938 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit: BPF 
prog-id=94 op=LOAD Feb 9 13:20:00.702000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000298a90 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439386363383738623661616439383137363764623263363262633938 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { 
perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit: BPF prog-id=95 op=LOAD Feb 9 13:20:00.702000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000298ad8 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439386363383738623661616439383137363764623263363262633938 Feb 9 13:20:00.702000 audit: BPF prog-id=95 op=UNLOAD Feb 9 13:20:00.702000 audit: BPF prog-id=94 op=UNLOAD Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { perfmon } for pid=5059 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit[5059]: AVC avc: denied { bpf } for pid=5059 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:00.702000 audit: BPF prog-id=96 op=LOAD Feb 9 13:20:00.702000 audit[5059]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000298ee8 items=0 ppid=5048 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:00.702000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439386363383738623661616439383137363764623263363262633938 Feb 9 13:20:00.726076 kubelet[1884]: E0209 13:20:00.726000 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:00.732099 env[1471]: time="2024-02-09T13:20:00.732074874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-q5v9s,Uid:40a62a42-1c08-4513-9fc4-544d64d73811,Namespace:kube-system,Attempt:1,} returns sandbox id \"d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22\"" Feb 9 13:20:01.178871 systemd-networkd[1320]: caliae999a9e2a4: Gained IPv6LL Feb 9 13:20:01.343683 env[1471]: time="2024-02-09T13:20:01.343533060Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:20:01.343683 env[1471]: time="2024-02-09T13:20:01.343543751Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] k8s.go 578: Cleaning up netns ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" iface="eth0" netns="/var/run/netns/cni-5ea0c765-2bf4-ff82-510b-22ee4928c45b" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" iface="eth0" netns="/var/run/netns/cni-5ea0c765-2bf4-ff82-510b-22ee4928c45b" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" iface="eth0" netns="/var/run/netns/cni-5ea0c765-2bf4-ff82-510b-22ee4928c45b" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] k8s.go 585: Releasing IP address(es) ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.389 [INFO][5115] utils.go 188: Calico CNI releasing IP address ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.400 [INFO][5145] ipam_plugin.go 415: Releasing address using handleID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.400 [INFO][5145] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.400 [INFO][5145] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.412 [WARNING][5145] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.412 [INFO][5145] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.415 [INFO][5145] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:20:01.416582 env[1471]: 2024-02-09 13:20:01.415 [INFO][5115] k8s.go 591: Teardown processing complete. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:01.417048 env[1471]: time="2024-02-09T13:20:01.416686746Z" level=info msg="TearDown network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" successfully" Feb 9 13:20:01.417048 env[1471]: time="2024-02-09T13:20:01.416716166Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" returns successfully" Feb 9 13:20:01.417165 env[1471]: time="2024-02-09T13:20:01.417148446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fz782,Uid:c4e7f3db-c090-45e4-97c1-38a20de9b400,Namespace:kube-system,Attempt:1,}" Feb 9 13:20:01.417976 systemd[1]: run-netns-cni\x2d5ea0c765\x2d2bf4\x2dff82\x2d510b\x2d22ee4928c45b.mount: Deactivated successfully. Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] k8s.go 578: Cleaning up netns ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" iface="eth0" netns="/var/run/netns/cni-bdb88da2-da53-dbfa-5bc5-9716d73a511c" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" iface="eth0" netns="/var/run/netns/cni-bdb88da2-da53-dbfa-5bc5-9716d73a511c" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" iface="eth0" netns="/var/run/netns/cni-bdb88da2-da53-dbfa-5bc5-9716d73a511c" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] k8s.go 585: Releasing IP address(es) ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.389 [INFO][5116] utils.go 188: Calico CNI releasing IP address ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.400 [INFO][5146] ipam_plugin.go 415: Releasing address using handleID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.400 [INFO][5146] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.415 [INFO][5146] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.428 [WARNING][5146] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.428 [INFO][5146] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.431 [INFO][5146] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:20:01.432932 env[1471]: 2024-02-09 13:20:01.432 [INFO][5116] k8s.go 591: Teardown processing complete. ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:01.433842 env[1471]: time="2024-02-09T13:20:01.432982647Z" level=info msg="TearDown network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" successfully" Feb 9 13:20:01.433842 env[1471]: time="2024-02-09T13:20:01.433009380Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" returns successfully" Feb 9 13:20:01.433842 env[1471]: time="2024-02-09T13:20:01.433442988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bhft8,Uid:1260a1b8-082b-4e29-998b-0de8b311e19f,Namespace:default,Attempt:1,}" Feb 9 13:20:01.460606 systemd[1]: run-netns-cni\x2dbdb88da2\x2dda53\x2ddbfa\x2d5bc5\x2d9716d73a511c.mount: Deactivated successfully. 
Feb 9 13:20:01.542356 systemd-networkd[1320]: calib2d1e08289e: Link UP Feb 9 13:20:01.581926 systemd-networkd[1320]: calib2d1e08289e: Gained carrier Feb 9 13:20:01.582183 systemd-networkd[1320]: calia53f9ae2838: Gained IPv6LL Feb 9 13:20:01.582566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib2d1e08289e: link becomes ready Feb 9 13:20:01.592053 systemd-networkd[1320]: cali159ae6d6030: Link UP Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.448 [INFO][5172] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0 coredns-787d4945fb- kube-system c4e7f3db-c090-45e4-97c1-38a20de9b400 1830 0 2024-02-09 13:10:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.67.80.7 coredns-787d4945fb-fz782 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2d1e08289e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.448 [INFO][5172] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.469 [INFO][5215] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" HandleID="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 
13:20:01.480 [INFO][5215] ipam_plugin.go 268: Auto assigning IP ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" HandleID="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5260), Attrs:map[string]string{"namespace":"kube-system", "node":"10.67.80.7", "pod":"coredns-787d4945fb-fz782", "timestamp":"2024-02-09 13:20:01.469810938 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.480 [INFO][5215] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.480 [INFO][5215] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.480 [INFO][5215] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.492 [INFO][5215] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.499 [INFO][5215] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.506 [INFO][5215] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.510 [INFO][5215] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.514 [INFO][5215] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.514 [INFO][5215] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.518 [INFO][5215] ipam.go 1682: Creating new handle: k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515 Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.525 [INFO][5215] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.536 [INFO][5215] ipam.go 1216: Successfully claimed IPs: [192.168.30.4/26] block=192.168.30.0/26 handle="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.536 [INFO][5215] ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.30.4/26] handle="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" host="10.67.80.7" Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.536 [INFO][5215] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:20:01.599511 env[1471]: 2024-02-09 13:20:01.536 [INFO][5215] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.4/26] IPv6=[] ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" HandleID="k8s-pod-network.297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.539 [INFO][5172] k8s.go 385: Populated endpoint ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c4e7f3db-c090-45e4-97c1-38a20de9b400", ResourceVersion:"1830", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"coredns-787d4945fb-fz782", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d1e08289e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.539 [INFO][5172] k8s.go 386: Calico CNI using IPs: [192.168.30.4/32] ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.539 [INFO][5172] dataplane_linux.go 68: Setting the host side veth name to calib2d1e08289e ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.581 [INFO][5172] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.582 [INFO][5172] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" 
WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c4e7f3db-c090-45e4-97c1-38a20de9b400", ResourceVersion:"1830", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515", Pod:"coredns-787d4945fb-fz782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d1e08289e", MAC:"06:7a:6a:f1:a3:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:01.600520 env[1471]: 2024-02-09 13:20:01.598 [INFO][5172] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515" Namespace="kube-system" Pod="coredns-787d4945fb-fz782" WorkloadEndpoint="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:01.608867 env[1471]: time="2024-02-09T13:20:01.608784603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:20:01.608867 env[1471]: time="2024-02-09T13:20:01.608820592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:20:01.608867 env[1471]: time="2024-02-09T13:20:01.608833443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:20:01.609080 env[1471]: time="2024-02-09T13:20:01.608964893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515 pid=5274 runtime=io.containerd.runc.v2 Feb 9 13:20:01.609000 audit[5279]: NETFILTER_CFG table=filter:86 family=2 entries=34 op=nft_register_chain pid=5279 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:20:01.609000 audit[5279]: SYSCALL arch=c000003e syscall=46 success=yes exit=17884 a0=3 a1=7ffeb8383bd0 a2=0 a3=7ffeb8383bbc items=0 ppid=4404 pid=5279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.609000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:20:01.618564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali159ae6d6030: link becomes ready Feb 9 13:20:01.618626 systemd-networkd[1320]: cali159ae6d6030: Gained carrier Feb 
9 13:20:01.635686 systemd[1]: Started cri-containerd-297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515.scope. Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.641000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
13:20:01.642000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit: BPF prog-id=97 op=LOAD Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=5274 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373837316132346162396662303039323937393533373637363461 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=5274 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373837316132346162396662303039323937393533373637363461 Feb 9 13:20:01.642000 
audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit: BPF prog-id=98 op=LOAD Feb 9 13:20:01.642000 audit[5285]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00009acd0 items=0 ppid=5274 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373837316132346162396662303039323937393533373637363461 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit: BPF prog-id=99 op=LOAD Feb 9 13:20:01.642000 audit[5285]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00009ad18 items=0 ppid=5274 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373837316132346162396662303039323937393533373637363461 Feb 9 13:20:01.642000 audit: BPF prog-id=99 op=UNLOAD Feb 9 13:20:01.642000 audit: BPF prog-id=98 op=UNLOAD Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 
audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { perfmon } for pid=5285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit[5285]: AVC avc: denied { bpf } for pid=5285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.642000 audit: BPF prog-id=100 op=LOAD Feb 9 13:20:01.642000 audit[5285]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00009b128 items=0 ppid=5274 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373837316132346162396662303039323937393533373637363461 Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.460 [INFO][5197] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0 nginx-deployment-8ffc5cf85- default 1260a1b8-082b-4e29-998b-0de8b311e19f 1829 0 2024-02-09 13:19:35 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.80.7 nginx-deployment-8ffc5cf85-bhft8 eth0 default [] [] [kns.default ksa.default.default] cali159ae6d6030 [] []}} ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.460 [INFO][5197] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.480 [INFO][5229] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" HandleID="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.499 [INFO][5229] 
ipam_plugin.go 268: Auto assigning IP ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" HandleID="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000719a40), Attrs:map[string]string{"namespace":"default", "node":"10.67.80.7", "pod":"nginx-deployment-8ffc5cf85-bhft8", "timestamp":"2024-02-09 13:20:01.480006491 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.499 [INFO][5229] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.536 [INFO][5229] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.536 [INFO][5229] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.540 [INFO][5229] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.549 [INFO][5229] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.558 [INFO][5229] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.563 [INFO][5229] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.568 [INFO][5229] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 
2024-02-09 13:20:01.568 [INFO][5229] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.571 [INFO][5229] ipam.go 1682: Creating new handle: k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860 Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.578 [INFO][5229] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.589 [INFO][5229] ipam.go 1216: Successfully claimed IPs: [192.168.30.5/26] block=192.168.30.0/26 handle="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.589 [INFO][5229] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.5/26] handle="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" host="10.67.80.7" Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.589 [INFO][5229] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:01.648740 env[1471]: 2024-02-09 13:20:01.589 [INFO][5229] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.5/26] IPv6=[] ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" HandleID="k8s-pod-network.885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.591 [INFO][5197] k8s.go 385: Populated endpoint ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"1260a1b8-082b-4e29-998b-0de8b311e19f", ResourceVersion:"1829", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-bhft8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali159ae6d6030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.591 [INFO][5197] k8s.go 386: Calico CNI using IPs: [192.168.30.5/32] ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.591 [INFO][5197] dataplane_linux.go 68: Setting the host side veth name to cali159ae6d6030 ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.618 [INFO][5197] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.637 [INFO][5197] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"1260a1b8-082b-4e29-998b-0de8b311e19f", ResourceVersion:"1829", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", 
"pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860", Pod:"nginx-deployment-8ffc5cf85-bhft8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali159ae6d6030", MAC:"a6:f3:69:27:d4:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:01.649464 env[1471]: 2024-02-09 13:20:01.647 [INFO][5197] k8s.go 491: Wrote updated endpoint to datastore ContainerID="885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bhft8" WorkloadEndpoint="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:01.658000 audit[5318]: NETFILTER_CFG table=filter:87 family=2 entries=48 op=nft_register_chain pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:20:01.658000 audit[5318]: SYSCALL arch=c000003e syscall=46 success=yes exit=23424 a0=3 a1=7ffdf38991a0 a2=0 a3=7ffdf389918c items=0 ppid=4404 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.658000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:20:01.665086 env[1471]: time="2024-02-09T13:20:01.665029208Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fz782,Uid:c4e7f3db-c090-45e4-97c1-38a20de9b400,Namespace:kube-system,Attempt:1,} returns sandbox id \"297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515\"" Feb 9 13:20:01.670071 env[1471]: time="2024-02-09T13:20:01.669980164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:20:01.670071 env[1471]: time="2024-02-09T13:20:01.670005653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:20:01.670071 env[1471]: time="2024-02-09T13:20:01.670013171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:20:01.670183 env[1471]: time="2024-02-09T13:20:01.670127992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860 pid=5331 runtime=io.containerd.runc.v2 Feb 9 13:20:01.677370 systemd[1]: Started cri-containerd-885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860.scope. 
Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.681000 audit: BPF prog-id=101 op=LOAD Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838353734326665323032636663663530393663646464633630333037 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=c items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838353734326665323032636663663530393663646464633630333037 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit: BPF 
prog-id=102 op=LOAD Feb 9 13:20:01.682000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c000309470 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838353734326665323032636663663530393663646464633630333037 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { 
perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit: BPF prog-id=103 op=LOAD Feb 9 13:20:01.682000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000149770 a2=78 a3=c0003094b8 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838353734326665323032636663663530393663646464633630333037 Feb 9 13:20:01.682000 audit: BPF prog-id=103 op=UNLOAD Feb 9 13:20:01.682000 audit: BPF prog-id=102 op=UNLOAD Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { perfmon } for pid=5341 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit[5341]: AVC avc: denied { bpf } for pid=5341 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:01.682000 audit: BPF prog-id=104 op=LOAD Feb 9 13:20:01.682000 audit[5341]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c0003098c8 items=0 ppid=5331 pid=5341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:01.682000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838353734326665323032636663663530393663646464633630333037 Feb 9 13:20:01.700248 env[1471]: time="2024-02-09T13:20:01.700221991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bhft8,Uid:1260a1b8-082b-4e29-998b-0de8b311e19f,Namespace:default,Attempt:1,} returns sandbox id \"885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860\"" Feb 9 13:20:01.726821 kubelet[1884]: E0209 13:20:01.726781 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:02.714849 systemd-networkd[1320]: cali8a4e189226b: Gained IPv6LL Feb 9 13:20:02.727575 kubelet[1884]: E0209 13:20:02.727502 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:02.778858 systemd-networkd[1320]: calib2d1e08289e: Gained IPv6LL Feb 9 13:20:03.610869 systemd-networkd[1320]: cali159ae6d6030: Gained IPv6LL Feb 9 13:20:03.728714 kubelet[1884]: E0209 13:20:03.728634 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:04.729265 kubelet[1884]: E0209 13:20:04.729192 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:04.998482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696535070.mount: Deactivated successfully. 
Feb 9 13:20:05.730317 kubelet[1884]: E0209 13:20:05.730211 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:06.731078 kubelet[1884]: E0209 13:20:06.731003 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:07.430482 env[1471]: time="2024-02-09T13:20:07.430455687Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:20:07.430969 env[1471]: time="2024-02-09T13:20:07.430953947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:20:07.431757 env[1471]: time="2024-02-09T13:20:07.431745028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:20:07.432518 env[1471]: time="2024-02-09T13:20:07.432507162Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 13:20:07.432937 env[1471]: time="2024-02-09T13:20:07.432923776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 13:20:07.433334 env[1471]: time="2024-02-09T13:20:07.433320929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 13:20:07.433921 env[1471]: time="2024-02-09T13:20:07.433907613Z" level=info msg="CreateContainer within sandbox \"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5\" 
for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 13:20:07.440526 env[1471]: time="2024-02-09T13:20:07.440476138Z" level=info msg="CreateContainer within sandbox \"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5cecac1b79a444b6aa94ad0862bc3abd6e43b3f7344a634158cdb54b788d5a06\"" Feb 9 13:20:07.440810 env[1471]: time="2024-02-09T13:20:07.440769937Z" level=info msg="StartContainer for \"5cecac1b79a444b6aa94ad0862bc3abd6e43b3f7344a634158cdb54b788d5a06\"" Feb 9 13:20:07.462981 systemd[1]: Started cri-containerd-5cecac1b79a444b6aa94ad0862bc3abd6e43b3f7344a634158cdb54b788d5a06.scope. Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.494875 kernel: kauditd_printk_skb: 310 callbacks suppressed Feb 9 13:20:07.494971 kernel: audit: type=1400 audit(1707484807.469:790): avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=4897 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:07.641347 kernel: audit: type=1300 audit(1707484807.469:790): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=4897 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:07.641377 kernel: audit: type=1327 
audit(1707484807.469:790): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563656361633162373961343434623661613934616430383632626333 Feb 9 13:20:07.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563656361633162373961343434623661613934616430383632626333 Feb 9 13:20:07.726251 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.731410 kubelet[1884]: E0209 13:20:07.731374 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:07.784065 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.841943 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { bpf } for pid=5379 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.899723 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.958006 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.976277 env[1471]: time="2024-02-09T13:20:07.976224520Z" level=info msg="StartContainer for \"5cecac1b79a444b6aa94ad0862bc3abd6e43b3f7344a634158cdb54b788d5a06\" returns successfully" Feb 9 13:20:08.016721 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:08.076581 kernel: audit: type=1400 audit(1707484807.469:791): avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC 
avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.469000 audit: BPF prog-id=105 op=LOAD Feb 9 13:20:07.469000 audit[5379]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0003be210 items=0 ppid=4897 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:07.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563656361633162373961343434623661613934616430383632626333 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { perfmon } for 
pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.552000 audit: BPF prog-id=106 op=LOAD Feb 9 13:20:07.552000 audit[5379]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0003be258 items=0 ppid=4897 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:07.552000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563656361633162373961343434623661613934616430383632626333 Feb 9 13:20:07.725000 audit: BPF prog-id=106 op=UNLOAD Feb 9 13:20:07.725000 audit: BPF prog-id=105 op=UNLOAD Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { perfmon } for pid=5379 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit[5379]: AVC avc: denied { bpf } for pid=5379 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:20:07.725000 audit: BPF prog-id=107 op=LOAD Feb 9 13:20:07.725000 audit[5379]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0003be2e8 items=0 ppid=4897 pid=5379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:20:07.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563656361633162373961343434623661613934616430383632626333 Feb 9 13:20:08.732595 kubelet[1884]: E0209 13:20:08.732468 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:09.733401 kubelet[1884]: E0209 13:20:09.733292 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:10.409941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598320593.mount: Deactivated successfully. 
Feb 9 13:20:10.734343 kubelet[1884]: E0209 13:20:10.734115 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:11.735159 kubelet[1884]: E0209 13:20:11.735093 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:12.736439 kubelet[1884]: E0209 13:20:12.736329 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:13.737527 kubelet[1884]: E0209 13:20:13.737421 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:14.738708 kubelet[1884]: E0209 13:20:14.738597 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:15.739777 kubelet[1884]: E0209 13:20:15.739658 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:16.740529 kubelet[1884]: E0209 13:20:16.740414 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:17.573994 kubelet[1884]: E0209 13:20:17.573880 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:17.589396 env[1471]: time="2024-02-09T13:20:17.589260107Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.615 [WARNING][5458] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"1260a1b8-082b-4e29-998b-0de8b311e19f", ResourceVersion:"1838", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860", Pod:"nginx-deployment-8ffc5cf85-bhft8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali159ae6d6030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.615 [INFO][5458] k8s.go 578: Cleaning up netns ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.615 [INFO][5458] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" iface="eth0" netns="" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.615 [INFO][5458] k8s.go 585: Releasing IP address(es) ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.615 [INFO][5458] utils.go 188: Calico CNI releasing IP address ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.625 [INFO][5471] ipam_plugin.go 415: Releasing address using handleID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.625 [INFO][5471] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.625 [INFO][5471] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.633 [WARNING][5471] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.633 [INFO][5471] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.635 [INFO][5471] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:17.636550 env[1471]: 2024-02-09 13:20:17.635 [INFO][5458] k8s.go 591: Teardown processing complete. ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.636893 env[1471]: time="2024-02-09T13:20:17.636570313Z" level=info msg="TearDown network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" successfully" Feb 9 13:20:17.636893 env[1471]: time="2024-02-09T13:20:17.636587875Z" level=info msg="StopPodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" returns successfully" Feb 9 13:20:17.636893 env[1471]: time="2024-02-09T13:20:17.636879351Z" level=info msg="RemovePodSandbox for \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:20:17.636952 env[1471]: time="2024-02-09T13:20:17.636903455Z" level=info msg="Forcibly stopping sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\"" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.654 [WARNING][5498] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"1260a1b8-082b-4e29-998b-0de8b311e19f", ResourceVersion:"1838", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 19, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"885742fe202cfcf5096cdddc60307fc800a8e6ded09a05130517bf2de9ae7860", Pod:"nginx-deployment-8ffc5cf85-bhft8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali159ae6d6030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.654 [INFO][5498] k8s.go 578: Cleaning up netns ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.654 [INFO][5498] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" iface="eth0" netns="" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.654 [INFO][5498] k8s.go 585: Releasing IP address(es) ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.654 [INFO][5498] utils.go 188: Calico CNI releasing IP address ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.665 [INFO][5510] ipam_plugin.go 415: Releasing address using handleID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.665 [INFO][5510] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.665 [INFO][5510] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.670 [WARNING][5510] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.670 [INFO][5510] ipam_plugin.go 443: Releasing address using workloadID ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" HandleID="k8s-pod-network.9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Workload="10.67.80.7-k8s-nginx--deployment--8ffc5cf85--bhft8-eth0" Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.672 [INFO][5510] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:17.673847 env[1471]: 2024-02-09 13:20:17.673 [INFO][5498] k8s.go 591: Teardown processing complete. ContainerID="9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543" Feb 9 13:20:17.673847 env[1471]: time="2024-02-09T13:20:17.673828955Z" level=info msg="TearDown network for sandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" successfully" Feb 9 13:20:17.675261 env[1471]: time="2024-02-09T13:20:17.675214619Z" level=info msg="RemovePodSandbox \"9526e2d05d1f9b8dcf2a346d85b506de0d645d3be749975e951327b3a38b9543\" returns successfully" Feb 9 13:20:17.675597 env[1471]: time="2024-02-09T13:20:17.675580699Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:20:17.740799 kubelet[1884]: E0209 13:20:17.740756 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.715 [WARNING][5539] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0", GenerateName:"calico-kube-controllers-68c77fd6bd-", Namespace:"calico-system", SelfLink:"", UID:"d7e5efc5-201d-49b9-967f-26a58631682a", ResourceVersion:"1816", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c77fd6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b", Pod:"calico-kube-controllers-68c77fd6bd-t8ckd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae999a9e2a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.715 [INFO][5539] k8s.go 578: Cleaning up netns ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.715 [INFO][5539] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" iface="eth0" netns="" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.715 [INFO][5539] k8s.go 585: Releasing IP address(es) ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.715 [INFO][5539] utils.go 188: Calico CNI releasing IP address ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.741 [INFO][5557] ipam_plugin.go 415: Releasing address using handleID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.741 [INFO][5557] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.742 [INFO][5557] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.757 [WARNING][5557] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.758 [INFO][5557] ipam_plugin.go 443: Releasing address using workloadID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.761 [INFO][5557] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:17.764391 env[1471]: 2024-02-09 13:20:17.762 [INFO][5539] k8s.go 591: Teardown processing complete. ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.765302 env[1471]: time="2024-02-09T13:20:17.764428840Z" level=info msg="TearDown network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" successfully" Feb 9 13:20:17.765302 env[1471]: time="2024-02-09T13:20:17.764467202Z" level=info msg="StopPodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" returns successfully" Feb 9 13:20:17.765302 env[1471]: time="2024-02-09T13:20:17.764963488Z" level=info msg="RemovePodSandbox for \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:20:17.765302 env[1471]: time="2024-02-09T13:20:17.765009707Z" level=info msg="Forcibly stopping sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\"" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.802 [WARNING][5587] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0", GenerateName:"calico-kube-controllers-68c77fd6bd-", Namespace:"calico-system", SelfLink:"", UID:"d7e5efc5-201d-49b9-967f-26a58631682a", ResourceVersion:"1816", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68c77fd6bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"79a18f98eb8730f7550f76e689a4b4ea1902eb8c4cfc3d00c78b73655f85587b", Pod:"calico-kube-controllers-68c77fd6bd-t8ckd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae999a9e2a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.802 [INFO][5587] k8s.go 578: Cleaning up netns ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.802 [INFO][5587] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" iface="eth0" netns="" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.802 [INFO][5587] k8s.go 585: Releasing IP address(es) ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.802 [INFO][5587] utils.go 188: Calico CNI releasing IP address ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.818 [INFO][5601] ipam_plugin.go 415: Releasing address using handleID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.818 [INFO][5601] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.818 [INFO][5601] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.823 [WARNING][5601] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.823 [INFO][5601] ipam_plugin.go 443: Releasing address using workloadID ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" HandleID="k8s-pod-network.862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Workload="10.67.80.7-k8s-calico--kube--controllers--68c77fd6bd--t8ckd-eth0" Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.824 [INFO][5601] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:17.826484 env[1471]: 2024-02-09 13:20:17.825 [INFO][5587] k8s.go 591: Teardown processing complete. ContainerID="862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e" Feb 9 13:20:17.826987 env[1471]: time="2024-02-09T13:20:17.826497595Z" level=info msg="TearDown network for sandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" successfully" Feb 9 13:20:17.827829 env[1471]: time="2024-02-09T13:20:17.827782428Z" level=info msg="RemovePodSandbox \"862191adad926a4c5ceb49f69a180f589798e8d29ec6ba5434cecba0fa65e24e\" returns successfully" Feb 9 13:20:17.828120 env[1471]: time="2024-02-09T13:20:17.828098140Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.855 [WARNING][5630] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c4e7f3db-c090-45e4-97c1-38a20de9b400", ResourceVersion:"1836", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", 
ContainerID:"297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515", Pod:"coredns-787d4945fb-fz782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d1e08289e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.855 [INFO][5630] k8s.go 578: Cleaning up netns ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.855 [INFO][5630] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" iface="eth0" netns="" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.855 [INFO][5630] k8s.go 585: Releasing IP address(es) ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.856 [INFO][5630] utils.go 188: Calico CNI releasing IP address ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.874 [INFO][5644] ipam_plugin.go 415: Releasing address using handleID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.874 [INFO][5644] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.874 [INFO][5644] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.932 [WARNING][5644] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.932 [INFO][5644] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.934 [INFO][5644] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 13:20:17.939683 env[1471]: 2024-02-09 13:20:17.937 [INFO][5630] k8s.go 591: Teardown processing complete. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:17.941426 env[1471]: time="2024-02-09T13:20:17.939726173Z" level=info msg="TearDown network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" successfully" Feb 9 13:20:17.941426 env[1471]: time="2024-02-09T13:20:17.939792834Z" level=info msg="StopPodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" returns successfully" Feb 9 13:20:17.941426 env[1471]: time="2024-02-09T13:20:17.940634251Z" level=info msg="RemovePodSandbox for \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:20:17.941426 env[1471]: time="2024-02-09T13:20:17.940722814Z" level=info msg="Forcibly stopping sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\"" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.981 [WARNING][5672] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"c4e7f3db-c090-45e4-97c1-38a20de9b400", ResourceVersion:"1836", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"297871a24ab9fb00929795376764af6a461bffc5aef958fe32606a0815881515", Pod:"coredns-787d4945fb-fz782", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d1e08289e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.982 [INFO][5672] k8s.go 578: Cleaning up netns 
ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.982 [INFO][5672] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" iface="eth0" netns="" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.982 [INFO][5672] k8s.go 585: Releasing IP address(es) ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.982 [INFO][5672] utils.go 188: Calico CNI releasing IP address ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.993 [INFO][5684] ipam_plugin.go 415: Releasing address using handleID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0" Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.993 [INFO][5684] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:17.993 [INFO][5684] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:18.001 [WARNING][5684] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0"
Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:18.001 [INFO][5684] ipam_plugin.go 443: Releasing address using workloadID ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" HandleID="k8s-pod-network.0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a" Workload="10.67.80.7-k8s-coredns--787d4945fb--fz782-eth0"
Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:18.003 [INFO][5684] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 13:20:18.004688 env[1471]: 2024-02-09 13:20:18.003 [INFO][5672] k8s.go 591: Teardown processing complete. ContainerID="0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a"
Feb 9 13:20:18.005191 env[1471]: time="2024-02-09T13:20:18.004683551Z" level=info msg="TearDown network for sandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" successfully"
Feb 9 13:20:18.006071 env[1471]: time="2024-02-09T13:20:18.006019948Z" level=info msg="RemovePodSandbox \"0cac64bb11a2b425d12aec82aeb15786596a785b175e75202f7b62bea5ecee2a\" returns successfully"
Feb 9 13:20:18.006370 env[1471]: time="2024-02-09T13:20:18.006348214Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\""
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.035 [WARNING][5709] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-csi--node--driver--72bhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f51fc53-a7af-4e05-9116-86df85873e6c", ResourceVersion:"1817", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 16, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5", Pod:"csi-node-driver-72bhh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia53f9ae2838", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.035 [INFO][5709] k8s.go 578: Cleaning up netns ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.035 [INFO][5709] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" iface="eth0" netns=""
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.035 [INFO][5709] k8s.go 585: Releasing IP address(es) ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.035 [INFO][5709] utils.go 188: Calico CNI releasing IP address ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.052 [INFO][5728] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.052 [INFO][5728] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.052 [INFO][5728] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.057 [WARNING][5728] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.057 [INFO][5728] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.059 [INFO][5728] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 13:20:18.061430 env[1471]: 2024-02-09 13:20:18.060 [INFO][5709] k8s.go 591: Teardown processing complete. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.062088 env[1471]: time="2024-02-09T13:20:18.061424633Z" level=info msg="TearDown network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" successfully"
Feb 9 13:20:18.062088 env[1471]: time="2024-02-09T13:20:18.061486798Z" level=info msg="StopPodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" returns successfully"
Feb 9 13:20:18.062088 env[1471]: time="2024-02-09T13:20:18.061888653Z" level=info msg="RemovePodSandbox for \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\""
Feb 9 13:20:18.062088 env[1471]: time="2024-02-09T13:20:18.061927706Z" level=info msg="Forcibly stopping sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\""
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.091 [WARNING][5756] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-csi--node--driver--72bhh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4f51fc53-a7af-4e05-9116-86df85873e6c", ResourceVersion:"1817", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 16, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"5b8389e07f6db312d2c4f6d06ae92a9b751b1b9a550c71bf9d8ba9fc996e8db5", Pod:"csi-node-driver-72bhh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia53f9ae2838", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.091 [INFO][5756] k8s.go 578: Cleaning up netns ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.091 [INFO][5756] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" iface="eth0" netns=""
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.091 [INFO][5756] k8s.go 585: Releasing IP address(es) ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.091 [INFO][5756] utils.go 188: Calico CNI releasing IP address ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.109 [INFO][5773] ipam_plugin.go 415: Releasing address using handleID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.109 [INFO][5773] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.109 [INFO][5773] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.115 [WARNING][5773] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.116 [INFO][5773] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" HandleID="k8s-pod-network.e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d" Workload="10.67.80.7-k8s-csi--node--driver--72bhh-eth0"
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.118 [INFO][5773] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 13:20:18.124039 env[1471]: 2024-02-09 13:20:18.121 [INFO][5756] k8s.go 591: Teardown processing complete. ContainerID="e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d"
Feb 9 13:20:18.124039 env[1471]: time="2024-02-09T13:20:18.123948360Z" level=info msg="TearDown network for sandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" successfully"
Feb 9 13:20:18.127771 env[1471]: time="2024-02-09T13:20:18.127663553Z" level=info msg="RemovePodSandbox \"e3f391942a1f6dc9457c0f7698dc85d6dac457400544a8f2bb234766bfb2003d\" returns successfully"
Feb 9 13:20:18.128659 env[1471]: time="2024-02-09T13:20:18.128537973Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\""
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.183 [WARNING][5805] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"40a62a42-1c08-4513-9fc4-544d64d73811", ResourceVersion:"1824", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22", Pod:"coredns-787d4945fb-q5v9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4e189226b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.183 [INFO][5805] k8s.go 578: Cleaning up netns ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.183 [INFO][5805] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" iface="eth0" netns=""
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.183 [INFO][5805] k8s.go 585: Releasing IP address(es) ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.183 [INFO][5805] utils.go 188: Calico CNI releasing IP address ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.199 [INFO][5821] ipam_plugin.go 415: Releasing address using handleID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.199 [INFO][5821] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.199 [INFO][5821] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.204 [WARNING][5821] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.204 [INFO][5821] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0"
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.205 [INFO][5821] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 9 13:20:18.207506 env[1471]: 2024-02-09 13:20:18.206 [INFO][5805] k8s.go 591: Teardown processing complete. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:18.207506 env[1471]: time="2024-02-09T13:20:18.207449226Z" level=info msg="TearDown network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" successfully" Feb 9 13:20:18.207506 env[1471]: time="2024-02-09T13:20:18.207475971Z" level=info msg="StopPodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" returns successfully" Feb 9 13:20:18.208150 env[1471]: time="2024-02-09T13:20:18.207805640Z" level=info msg="RemovePodSandbox for \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:20:18.208150 env[1471]: time="2024-02-09T13:20:18.207833667Z" level=info msg="Forcibly stopping sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\"" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.234 [WARNING][5849] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"40a62a42-1c08-4513-9fc4-544d64d73811", ResourceVersion:"1824", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 10, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"d98cc878b6aad981767db2c62bc9872c404055b0fc99d4e1593eb932d3a4ea22", Pod:"coredns-787d4945fb-q5v9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a4e189226b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.234 [INFO][5849] k8s.go 578: Cleaning up netns 
ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.234 [INFO][5849] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" iface="eth0" netns="" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.234 [INFO][5849] k8s.go 585: Releasing IP address(es) ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.234 [INFO][5849] utils.go 188: Calico CNI releasing IP address ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.252 [INFO][5865] ipam_plugin.go 415: Releasing address using handleID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.252 [INFO][5865] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.252 [INFO][5865] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.328 [WARNING][5865] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.328 [INFO][5865] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" HandleID="k8s-pod-network.6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Workload="10.67.80.7-k8s-coredns--787d4945fb--q5v9s-eth0" Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.330 [INFO][5865] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:20:18.336102 env[1471]: 2024-02-09 13:20:18.333 [INFO][5849] k8s.go 591: Teardown processing complete. ContainerID="6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce" Feb 9 13:20:18.337722 env[1471]: time="2024-02-09T13:20:18.336145815Z" level=info msg="TearDown network for sandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" successfully" Feb 9 13:20:18.339656 env[1471]: time="2024-02-09T13:20:18.339585798Z" level=info msg="RemovePodSandbox \"6a289f23a318d843c96031a345d64a9989da3e61951b3cfb1727fcc4765dc5ce\" returns successfully" Feb 9 13:20:18.741569 kubelet[1884]: E0209 13:20:18.741438 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:19.742357 kubelet[1884]: E0209 13:20:19.742246 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:20.743284 kubelet[1884]: E0209 13:20:20.743163 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:21.744103 kubelet[1884]: E0209 13:20:21.743907 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:20:22.744723 kubelet[1884]: E0209 13:20:22.744609 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:23.745695 kubelet[1884]: E0209 13:20:23.745578 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:24.746439 kubelet[1884]: E0209 13:20:24.746329 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:25.746814 kubelet[1884]: E0209 13:20:25.746706 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:26.747285 kubelet[1884]: E0209 13:20:26.747174 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:27.748294 kubelet[1884]: E0209 13:20:27.748171 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:28.748804 kubelet[1884]: E0209 13:20:28.748728 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:29.749833 kubelet[1884]: E0209 13:20:29.749726 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:30.750054 kubelet[1884]: E0209 13:20:30.749946 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:31.751118 kubelet[1884]: E0209 13:20:31.751010 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:32.752139 kubelet[1884]: E0209 13:20:32.752026 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:20:33.753118 kubelet[1884]: E0209 13:20:33.753007 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:34.753953 kubelet[1884]: E0209 13:20:34.753839 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:35.754443 kubelet[1884]: E0209 13:20:35.754323 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:36.754699 kubelet[1884]: E0209 13:20:36.754589 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:37.574459 kubelet[1884]: E0209 13:20:37.574354 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:37.754928 kubelet[1884]: E0209 13:20:37.754816 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:38.755839 kubelet[1884]: E0209 13:20:38.755725 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:39.756151 kubelet[1884]: E0209 13:20:39.756026 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:40.756793 kubelet[1884]: E0209 13:20:40.756682 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:41.757368 kubelet[1884]: E0209 13:20:41.757257 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:42.758428 kubelet[1884]: E0209 13:20:42.758307 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:20:43.759101 kubelet[1884]: E0209 13:20:43.758990 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:44.759988 kubelet[1884]: E0209 13:20:44.759884 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:45.760750 kubelet[1884]: E0209 13:20:45.760643 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:46.761344 kubelet[1884]: E0209 13:20:46.761228 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:47.762540 kubelet[1884]: E0209 13:20:47.762433 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:48.763307 kubelet[1884]: E0209 13:20:48.763189 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:49.764255 kubelet[1884]: E0209 13:20:49.764138 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:50.765142 kubelet[1884]: E0209 13:20:50.765034 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:51.766094 kubelet[1884]: E0209 13:20:51.765841 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:52.767101 kubelet[1884]: E0209 13:20:52.766989 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:53.767707 kubelet[1884]: E0209 13:20:53.767594 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:20:54.768277 kubelet[1884]: E0209 13:20:54.768166 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:55.768575 kubelet[1884]: E0209 13:20:55.768487 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:56.769335 kubelet[1884]: E0209 13:20:56.769229 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:57.573980 kubelet[1884]: E0209 13:20:57.573872 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:57.770008 kubelet[1884]: E0209 13:20:57.769885 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:58.770842 kubelet[1884]: E0209 13:20:58.770736 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:20:59.771581 kubelet[1884]: E0209 13:20:59.771448 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:00.772283 kubelet[1884]: E0209 13:21:00.772164 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:01.772727 kubelet[1884]: E0209 13:21:01.772615 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:02.773522 kubelet[1884]: E0209 13:21:02.773412 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:03.773831 kubelet[1884]: E0209 13:21:03.773714 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:21:04.774920 kubelet[1884]: E0209 13:21:04.774805 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:05.775028 kubelet[1884]: E0209 13:21:05.774957 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:06.776040 kubelet[1884]: E0209 13:21:06.775927 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:07.777126 kubelet[1884]: E0209 13:21:07.777050 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:08.778190 kubelet[1884]: E0209 13:21:08.778076 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:09.778337 kubelet[1884]: E0209 13:21:09.778231 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:10.779299 kubelet[1884]: E0209 13:21:10.779189 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:11.780314 kubelet[1884]: E0209 13:21:11.780209 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:12.780760 kubelet[1884]: E0209 13:21:12.780657 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:13.565886 systemd[1]: Started sshd@9-86.109.11.101:22-61.177.172.140:30675.service. Feb 9 13:21:13.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.11.101:22-61.177.172.140:30675 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 13:21:13.591861 kernel: kauditd_printk_skb: 33 callbacks suppressed Feb 9 13:21:13.591943 kernel: audit: type=1130 audit(1707484873.565:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.11.101:22-61.177.172.140:30675 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 13:21:13.781499 kubelet[1884]: E0209 13:21:13.781430 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:14.484895 sshd[5965]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 13:21:14.484000 audit[5965]: USER_AUTH pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed' Feb 9 13:21:14.571721 kernel: audit: type=1100 audit(1707484874.484:797): pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed' Feb 9 13:21:14.782059 kubelet[1884]: E0209 13:21:14.781822 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:15.782326 kubelet[1884]: E0209 13:21:15.782210 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:16.782984 kubelet[1884]: E0209 13:21:16.782876 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:16.983339 sshd[5965]: Failed password for root from 61.177.172.140 port 30675 ssh2 Feb 9 13:21:17.574476 kubelet[1884]: E0209 13:21:17.574382 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:17.690000 audit[5965]: USER_AUTH pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed' Feb 9 13:21:17.777653 kernel: audit: type=1100 audit(1707484877.690:798): pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed' Feb 9 13:21:17.783959 kubelet[1884]: E0209 13:21:17.783920 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:18.785151 kubelet[1884]: E0209 13:21:18.785036 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:19.785757 kubelet[1884]: E0209 13:21:19.785641 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:19.932963 sshd[5965]: Failed password for root from 61.177.172.140 port 30675 ssh2 Feb 9 13:21:20.787035 kubelet[1884]: E0209 13:21:20.786963 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:21:20.892000 audit[5965]: USER_AUTH pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed' Feb 9 13:21:20.979589 kernel: audit: type=1100 audit(1707484880.892:799): pid=5965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:21.787536 kubelet[1884]: E0209 13:21:21.787427 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:22.788007 kubelet[1884]: E0209 13:21:22.787940 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:23.215271 sshd[5965]: Failed password for root from 61.177.172.140 port 30675 ssh2
Feb 9 13:21:23.788976 kubelet[1884]: E0209 13:21:23.788867 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:24.094667 sshd[5965]: Received disconnect from 61.177.172.140 port 30675:11: [preauth]
Feb 9 13:21:24.094667 sshd[5965]: Disconnected from authenticating user root 61.177.172.140 port 30675 [preauth]
Feb 9 13:21:24.095198 sshd[5965]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 13:21:24.097261 systemd[1]: sshd@9-86.109.11.101:22-61.177.172.140:30675.service: Deactivated successfully.
Feb 9 13:21:24.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.11.101:22-61.177.172.140:30675 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:24.185739 kernel: audit: type=1131 audit(1707484884.097:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.11.101:22-61.177.172.140:30675 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:24.338803 systemd[1]: Started sshd@10-86.109.11.101:22-61.177.172.140:41812.service.
Feb 9 13:21:24.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.11.101:22-61.177.172.140:41812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:24.426731 kernel: audit: type=1130 audit(1707484884.337:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.11.101:22-61.177.172.140:41812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:24.789407 kubelet[1884]: E0209 13:21:24.789303 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:25.305772 sshd[5983]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 13:21:25.304000 audit[5983]: USER_AUTH pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:25.392618 kernel: audit: type=1100 audit(1707484885.304:802): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:25.789990 kubelet[1884]: E0209 13:21:25.789879 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:26.791111 kubelet[1884]: E0209 13:21:26.791005 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:27.648403 sshd[5983]: Failed password for root from 61.177.172.140 port 41812 ssh2
Feb 9 13:21:27.791989 kubelet[1884]: E0209 13:21:27.791867 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:28.515000 audit[5983]: ANOM_LOGIN_FAILURES pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:28.517426 sshd[5983]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 13:21:28.519227 systemd[1]: Started sshd@11-86.109.11.101:22-85.209.11.27:17236.service.
Feb 9 13:21:28.515000 audit[5983]: USER_AUTH pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:28.672310 kernel: audit: type=2100 audit(1707484888.515:803): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:28.672348 kernel: audit: type=1100 audit(1707484888.515:804): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:28.672367 kernel: audit: type=1130 audit(1707484888.517:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.11.101:22-85.209.11.27:17236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:28.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.11.101:22-85.209.11.27:17236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:28.792076 kubelet[1884]: E0209 13:21:28.791979 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:29.792531 kubelet[1884]: E0209 13:21:29.792418 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:30.556030 sshd[5999]: Invalid user user from 85.209.11.27 port 17236
Feb 9 13:21:30.774590 sshd[5999]: pam_faillock(sshd:auth): User unknown
Feb 9 13:21:30.775821 sshd[5999]: pam_unix(sshd:auth): check pass; user unknown
Feb 9 13:21:30.775910 sshd[5999]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.27
Feb 9 13:21:30.776913 sshd[5999]: pam_faillock(sshd:auth): User unknown
Feb 9 13:21:30.775000 audit[5999]: USER_AUTH pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="user" exe="/usr/sbin/sshd" hostname=85.209.11.27 addr=85.209.11.27 terminal=ssh res=failed'
Feb 9 13:21:30.792998 kubelet[1884]: E0209 13:21:30.792955 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:30.869737 kernel: audit: type=1100 audit(1707484890.775:806): pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="user" exe="/usr/sbin/sshd" hostname=85.209.11.27 addr=85.209.11.27 terminal=ssh res=failed'
Feb 9 13:21:31.271488 sshd[5983]: Failed password for root from 61.177.172.140 port 41812 ssh2
Feb 9 13:21:31.730000 audit[5983]: USER_AUTH pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:31.793263 kubelet[1884]: E0209 13:21:31.793218 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:31.825732 kernel: audit: type=1100 audit(1707484891.730:807): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:32.793871 kubelet[1884]: E0209 13:21:32.793765 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:32.802777 sshd[5999]: Failed password for invalid user user from 85.209.11.27 port 17236 ssh2
Feb 9 13:21:33.563420 sshd[5983]: Failed password for root from 61.177.172.140 port 41812 ssh2
Feb 9 13:21:33.794109 kubelet[1884]: E0209 13:21:33.793993 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:34.781708 sshd[5999]: Connection closed by invalid user user 85.209.11.27 port 17236 [preauth]
Feb 9 13:21:34.784204 systemd[1]: sshd@11-86.109.11.101:22-85.209.11.27:17236.service: Deactivated successfully.
Feb 9 13:21:34.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.11.101:22-85.209.11.27:17236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:34.795007 kubelet[1884]: E0209 13:21:34.794968 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:34.877723 kernel: audit: type=1131 audit(1707484894.784:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.11.101:22-85.209.11.27:17236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Feb 9 13:21:34.946452 sshd[5983]: Received disconnect from 61.177.172.140 port 41812:11: [preauth]
Feb 9 13:21:34.946452 sshd[5983]: Disconnected from authenticating user root 61.177.172.140 port 41812 [preauth]
Feb 9 13:21:34.946712 sshd[5983]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 13:21:34.947520 systemd[1]: sshd@10-86.109.11.101:22-61.177.172.140:41812.service: Deactivated successfully.
Feb 9 13:21:34.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.11.101:22-61.177.172.140:41812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:35.039742 kernel: audit: type=1131 audit(1707484894.947:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.11.101:22-61.177.172.140:41812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Feb 9 13:21:35.796176 kubelet[1884]: E0209 13:21:35.796069 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:36.797005 kubelet[1884]: E0209 13:21:36.796896 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:37.573772 kubelet[1884]: E0209 13:21:37.573707 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:37.797959 kubelet[1884]: E0209 13:21:37.797890 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:38.798618 kubelet[1884]: E0209 13:21:38.798506 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:39.799456 kubelet[1884]: E0209 13:21:39.799384 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:40.800617 kubelet[1884]: E0209 13:21:40.800530 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:40.830040 update_engine[1463]: I0209 13:21:40.829923 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 13:21:40.830040 update_engine[1463]: I0209 13:21:40.830001 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 13:21:40.831461 update_engine[1463]: I0209 13:21:40.831406 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 13:21:40.832409 update_engine[1463]: I0209 13:21:40.832361 1463 omaha_request_params.cc:62] Current group set to lts
Feb 9 13:21:40.832784 update_engine[1463]: I0209 13:21:40.832743 1463 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 13:21:40.832784 update_engine[1463]: I0209 13:21:40.832767 1463 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 13:21:40.833184 update_engine[1463]: I0209 13:21:40.832813 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 13:21:40.833184 update_engine[1463]: I0209 13:21:40.832901 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 9 13:21:40.833184 update_engine[1463]: I0209 13:21:40.833117 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 13:21:40.833184 update_engine[1463]: I0209 13:21:40.833144 1463 omaha_request_action.cc:271] Request:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]:
Feb 9 13:21:40.833184 update_engine[1463]: I0209 13:21:40.833160 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 13:21:40.835063 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 9 13:21:40.836664 update_engine[1463]: I0209 13:21:40.836655 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 13:21:40.836723 update_engine[1463]: E0209 13:21:40.836715 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 13:21:40.836765 update_engine[1463]: I0209 13:21:40.836753 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 9 13:21:41.801724 kubelet[1884]: E0209 13:21:41.801616 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:42.802856 kubelet[1884]: E0209 13:21:42.802787 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:43.803882 kubelet[1884]: E0209 13:21:43.803807 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:44.804587 kubelet[1884]: E0209 13:21:44.804463 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:45.104790 systemd[1]: Started sshd@12-86.109.11.101:22-61.177.172.140:61034.service.
Feb 9 13:21:45.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.11.101:22-61.177.172.140:61034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:45.198746 kernel: audit: type=1130 audit(1707484905.103:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.11.101:22-61.177.172.140:61034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:45.805358 kubelet[1884]: E0209 13:21:45.805250 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:46.460318 sshd[6036]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 13:21:46.459000 audit[6036]: USER_AUTH pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:46.553739 kernel: audit: type=1100 audit(1707484906.459:811): pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:46.805728 kubelet[1884]: E0209 13:21:46.805519 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:47.806246 kubelet[1884]: E0209 13:21:47.806142 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:48.350743 sshd[6036]: Failed password for root from 61.177.172.140 port 61034 ssh2
Feb 9 13:21:48.807339 kubelet[1884]: E0209 13:21:48.807103 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:49.672000 audit[6036]: USER_AUTH pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:49.766734 kernel: audit: type=1100 audit(1707484909.672:812): pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:49.807541 kubelet[1884]: E0209 13:21:49.807471 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:50.808366 kubelet[1884]: E0209 13:21:50.808255 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:50.833773 update_engine[1463]: I0209 13:21:50.833685 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 13:21:50.834872 update_engine[1463]: I0209 13:21:50.834239 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 13:21:50.834872 update_engine[1463]: E0209 13:21:50.834499 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 13:21:50.834872 update_engine[1463]: I0209 13:21:50.834769 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 9 13:21:51.809191 kubelet[1884]: E0209 13:21:51.809005 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:51.975828 sshd[6036]: Failed password for root from 61.177.172.140 port 61034 ssh2
Feb 9 13:21:52.809993 kubelet[1884]: E0209 13:21:52.809876 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:52.885000 audit[6036]: USER_AUTH pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:52.979717 kernel: audit: type=1100 audit(1707484912.885:813): pid=6036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=?
acct="root" exe="/usr/sbin/sshd" hostname=61.177.172.140 addr=61.177.172.140 terminal=ssh res=failed'
Feb 9 13:21:53.810858 kubelet[1884]: E0209 13:21:53.810740 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:54.811916 kubelet[1884]: E0209 13:21:54.811806 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:55.600491 sshd[6036]: Failed password for root from 61.177.172.140 port 61034 ssh2
Feb 9 13:21:55.812727 kubelet[1884]: E0209 13:21:55.812618 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:56.468025 sshd[6036]: Received disconnect from 61.177.172.140 port 61034:11: [preauth]
Feb 9 13:21:56.468025 sshd[6036]: Disconnected from authenticating user root 61.177.172.140 port 61034 [preauth]
Feb 9 13:21:56.468615 sshd[6036]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root
Feb 9 13:21:56.470624 systemd[1]: sshd@12-86.109.11.101:22-61.177.172.140:61034.service: Deactivated successfully.
Feb 9 13:21:56.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.11.101:22-61.177.172.140:61034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 13:21:56.564734 kernel: audit: type=1131 audit(1707484916.470:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.11.101:22-61.177.172.140:61034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Feb 9 13:21:56.814022 kubelet[1884]: E0209 13:21:56.813787 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:57.574726 kubelet[1884]: E0209 13:21:57.574620 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:57.815103 kubelet[1884]: E0209 13:21:57.814991 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:58.815376 kubelet[1884]: E0209 13:21:58.815252 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:21:59.816116 kubelet[1884]: E0209 13:21:59.816008 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:00.816397 kubelet[1884]: E0209 13:22:00.816286 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:00.826742 update_engine[1463]: I0209 13:22:00.826622 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 13:22:00.827520 update_engine[1463]: I0209 13:22:00.827085 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 13:22:00.827520 update_engine[1463]: E0209 13:22:00.827283 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 13:22:00.827520 update_engine[1463]: I0209 13:22:00.827454 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 13:22:01.816499 kubelet[1884]: E0209 13:22:01.816423 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:02.816781 kubelet[1884]: E0209 13:22:02.816673 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:03.817459 kubelet[1884]: E0209 13:22:03.817337 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:04.818513 kubelet[1884]: E0209 13:22:04.818397 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:05.818990 kubelet[1884]: E0209 13:22:05.818879 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:06.819396 kubelet[1884]: E0209 13:22:06.819311 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:07.820510 kubelet[1884]: E0209 13:22:07.820400 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:08.820887 kubelet[1884]: E0209 13:22:08.820775 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:09.821164 kubelet[1884]: E0209 13:22:09.821052 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:10.821928 kubelet[1884]: E0209 13:22:10.821818 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:10.834735 update_engine[1463]: I0209 13:22:10.834618 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 13:22:10.835508 update_engine[1463]: I0209 13:22:10.835088 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 13:22:10.835508 update_engine[1463]: E0209 13:22:10.835292 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 13:22:10.835508 update_engine[1463]: I0209 13:22:10.835440 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 13:22:10.835508 update_engine[1463]: I0209 13:22:10.835456 1463 omaha_request_action.cc:621] Omaha request response:
Feb 9 13:22:10.835929 update_engine[1463]: E0209 13:22:10.835631 1463 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835661 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835671 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835680 1463 update_attempter.cc:306] Processing Done.
Feb 9 13:22:10.835929 update_engine[1463]: E0209 13:22:10.835706 1463 update_attempter.cc:619] Update failed.
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835716 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835724 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835733 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835882 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 13:22:10.835929 update_engine[1463]: I0209 13:22:10.835937 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.835948 1463 omaha_request_action.cc:271] Request:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]:
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.835958 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836277 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 13:22:10.836853 update_engine[1463]: E0209 13:22:10.836442 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836588 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836602 1463 omaha_request_action.cc:621] Omaha request response:
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836613 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836621 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836629 1463 update_attempter.cc:306] Processing Done.
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836637 1463 update_attempter.cc:310] Error event sent.
Feb 9 13:22:10.836853 update_engine[1463]: I0209 13:22:10.836656 1463 update_check_scheduler.cc:74] Next update check in 41m8s
Feb 9 13:22:10.838317 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 13:22:10.838317 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 13:22:11.822374 kubelet[1884]: E0209 13:22:11.822260 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:12.823581 kubelet[1884]: E0209 13:22:12.823496 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:13.823859 kubelet[1884]: E0209 13:22:13.823751 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:14.824795 kubelet[1884]: E0209 13:22:14.824732 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:15.825719 kubelet[1884]: E0209 13:22:15.825613 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:16.826271 kubelet[1884]: E0209 13:22:16.826159 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:17.573912 kubelet[1884]: E0209 13:22:17.573805 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:17.827597 kubelet[1884]: E0209 13:22:17.827387 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:18.828247 kubelet[1884]: E0209 13:22:18.828165 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:19.828505 kubelet[1884]: E0209 13:22:19.828423 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:20.829729 kubelet[1884]: E0209 13:22:20.829607 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:21.830322 kubelet[1884]: E0209 13:22:21.830209 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:22.831593 kubelet[1884]: E0209 13:22:22.831460 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:23.832336 kubelet[1884]: E0209 13:22:23.832218 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:24.832656 kubelet[1884]: E0209 13:22:24.832538 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:25.832825 kubelet[1884]: E0209 13:22:25.832709 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:26.833900 kubelet[1884]: E0209 13:22:26.833786 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:27.834999 kubelet[1884]: E0209 13:22:27.834885 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:28.835749 kubelet[1884]: E0209 13:22:28.835727 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:29.836844 kubelet[1884]: E0209 13:22:29.836771 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:30.837728 kubelet[1884]: E0209 13:22:30.837651 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:31.837969 kubelet[1884]: E0209 13:22:31.837889 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:32.838923 kubelet[1884]: E0209 13:22:32.838854 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:33.839951 kubelet[1884]: E0209 13:22:33.839875 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:34.841119 kubelet[1884]: E0209 13:22:34.841006 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:35.841642 kubelet[1884]: E0209 13:22:35.841525 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:36.842849 kubelet[1884]: E0209 13:22:36.842726 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:37.574280 kubelet[1884]: E0209 13:22:37.574170 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:37.844042 kubelet[1884]: E0209 13:22:37.843803 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:38.844163 kubelet[1884]: E0209 13:22:38.844047 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:39.844464 kubelet[1884]: E0209 13:22:39.844337 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:40.844841 kubelet[1884]: E0209 13:22:40.844719 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:41.845707 kubelet[1884]: E0209 13:22:41.845594 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:42.845905 kubelet[1884]: E0209 13:22:42.845806 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 13:22:42.856426 kubelet[1884]: I0209 13:22:42.856337 1884 topology_manager.go:210] "Topology Admit Handler"
Feb 9 13:22:42.869709 systemd[1]: Created slice kubepods-besteffort-podd6b35424_658a_4fba_9cb8_2b698682ca5f.slice.
Feb 9 13:22:42.930000 audit[6140]: NETFILTER_CFG table=filter:88 family=2 entries=24 op=nft_register_rule pid=6140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 13:22:42.932092 kubelet[1884]: I0209 13:22:42.932049 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d6b35424-658a-4fba-9cb8-2b698682ca5f-data\") pod \"nfs-server-provisioner-0\" (UID: \"d6b35424-658a-4fba-9cb8-2b698682ca5f\") " pod="default/nfs-server-provisioner-0"
Feb 9 13:22:42.932296 kubelet[1884]: I0209 13:22:42.932150 1884 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcwz\" (UniqueName: \"kubernetes.io/projected/d6b35424-658a-4fba-9cb8-2b698682ca5f-kube-api-access-lzcwz\") pod \"nfs-server-provisioner-0\" (UID: \"d6b35424-658a-4fba-9cb8-2b698682ca5f\") " pod="default/nfs-server-provisioner-0"
Feb 9 13:22:42.930000 audit[6140]: SYSCALL arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffec7098be0 a2=0 a3=7ffec7098bcc items=0 ppid=2177 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none)
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.100427 kernel: audit: type=1325 audit(1707484962.930:815): table=filter:88 family=2 entries=24 op=nft_register_rule pid=6140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:22:43.100472 kernel: audit: type=1300 audit(1707484962.930:815): arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffec7098be0 a2=0 a3=7ffec7098bcc items=0 ppid=2177 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.100515 kernel: audit: type=1327 audit(1707484962.930:815): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:42.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:42.931000 audit[6140]: NETFILTER_CFG table=nat:89 family=2 entries=30 op=nft_register_rule pid=6140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:22:43.175071 env[1471]: time="2024-02-09T13:22:43.175023548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d6b35424-658a-4fba-9cb8-2b698682ca5f,Namespace:default,Attempt:0,}" Feb 9 13:22:42.931000 audit[6140]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffec7098be0 a2=0 a3=31030 items=0 ppid=2177 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.329795 kernel: audit: type=1325 audit(1707484962.931:816): table=nat:89 family=2 entries=30 op=nft_register_rule pid=6140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 
9 13:22:43.329829 kernel: audit: type=1300 audit(1707484962.931:816): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffec7098be0 a2=0 a3=31030 items=0 ppid=2177 pid=6140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.329846 kernel: audit: type=1327 audit(1707484962.931:816): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:42.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:43.412958 systemd-networkd[1320]: cali60e51b789ff: Link UP Feb 9 13:22:43.470004 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 13:22:43.470051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 13:22:43.470110 systemd-networkd[1320]: cali60e51b789ff: Gained carrier Feb 9 13:22:43.469000 audit[6208]: NETFILTER_CFG table=filter:90 family=2 entries=36 op=nft_register_rule pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:22:43.469000 audit[6208]: SYSCALL arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7fffa62f6b90 a2=0 a3=7fffa62f6b7c items=0 ppid=2177 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.349 [INFO][6143] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.7-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d6b35424-658a-4fba-9cb8-2b698682ca5f 2177 0 2024-02-09 13:22:42 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 
controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.67.80.7 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.349 [INFO][6143] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.363 [INFO][6163] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" HandleID="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Workload="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.373 [INFO][6163] ipam_plugin.go 268: Auto assigning IP ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" HandleID="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Workload="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002c1130), Attrs:map[string]string{"namespace":"default", "node":"10.67.80.7", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 13:22:43.363887347 +0000 UTC"}, Hostname:"10.67.80.7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.373 [INFO][6163] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.373 [INFO][6163] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.373 [INFO][6163] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.7' Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.376 [INFO][6163] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.380 [INFO][6163] ipam.go 372: Looking up existing affinities for host host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.385 [INFO][6163] ipam.go 489: Trying affinity for 192.168.30.0/26 host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.388 [INFO][6163] ipam.go 155: Attempting to load block cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.392 [INFO][6163] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.30.0/26 host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.392 [INFO][6163] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.30.0/26 handle="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.395 [INFO][6163] ipam.go 1682: Creating new handle: 
k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.400 [INFO][6163] ipam.go 1203: Writing block in order to claim IPs block=192.168.30.0/26 handle="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.410 [INFO][6163] ipam.go 1216: Successfully claimed IPs: [192.168.30.6/26] block=192.168.30.0/26 handle="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.411 [INFO][6163] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.30.6/26] handle="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" host="10.67.80.7" Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.411 [INFO][6163] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 13:22:43.530111 env[1471]: 2024-02-09 13:22:43.411 [INFO][6163] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.30.6/26] IPv6=[] ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" HandleID="k8s-pod-network.63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Workload="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530776 env[1471]: 2024-02-09 13:22:43.411 [INFO][6143] k8s.go 385: Populated endpoint ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d6b35424-658a-4fba-9cb8-2b698682ca5f", ResourceVersion:"2177", Generation:0, 
CreationTimestamp:time.Date(2024, time.February, 9, 13, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.30.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:22:43.530776 env[1471]: 2024-02-09 13:22:43.412 [INFO][6143] k8s.go 386: Calico CNI using IPs: [192.168.30.6/32] ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530776 env[1471]: 2024-02-09 13:22:43.412 [INFO][6143] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530776 env[1471]: 2024-02-09 13:22:43.470 [INFO][6143] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.530965 env[1471]: 2024-02-09 13:22:43.470 [INFO][6143] k8s.go 413: Added Mac, interface 
name, and active container ID to endpoint ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.7-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d6b35424-658a-4fba-9cb8-2b698682ca5f", ResourceVersion:"2177", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 13, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.7", ContainerID:"63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.30.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"12:4a:42:35:9b:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 13:22:43.530965 env[1471]: 2024-02-09 13:22:43.483 [INFO][6143] k8s.go 491: Wrote updated endpoint to datastore ContainerID="63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.7-k8s-nfs--server--provisioner--0-eth0" Feb 9 13:22:43.535529 env[1471]: time="2024-02-09T13:22:43.535494259Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 13:22:43.535529 env[1471]: time="2024-02-09T13:22:43.535518103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 13:22:43.535529 env[1471]: time="2024-02-09T13:22:43.535527088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 13:22:43.535672 env[1471]: time="2024-02-09T13:22:43.535601244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d pid=6239 runtime=io.containerd.runc.v2 Feb 9 13:22:43.555075 systemd[1]: Started cri-containerd-63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d.scope. Feb 9 13:22:43.627539 kernel: audit: type=1325 audit(1707484963.469:817): table=filter:90 family=2 entries=36 op=nft_register_rule pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:22:43.627589 kernel: audit: type=1300 audit(1707484963.469:817): arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7fffa62f6b90 a2=0 a3=7fffa62f6b7c items=0 ppid=2177 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.627635 kernel: audit: type=1327 audit(1707484963.469:817): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:43.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.686598 kernel: audit: type=1400 audit(1707484963.633:818): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.633000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 9 13:22:43.469000 audit[6208]: NETFILTER_CFG table=nat:91 family=2 entries=30 op=nft_register_rule pid=6208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 13:22:43.469000 audit[6208]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fffa62f6b90 a2=0 a3=31030 items=0 ppid=2177 pid=6208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 13:22:43.747000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.747000 audit: BPF prog-id=108 op=LOAD Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=6239 pid=6250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633616337643165646339653232356362356438613832386339663236 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=6239 pid=6250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633616337643165646339653232356362356438613832386339663236 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { 
perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit: BPF prog-id=109 op=LOAD Feb 9 13:22:43.748000 audit[6250]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00008e5c0 items=0 ppid=6239 pid=6250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633616337643165646339653232356362356438613832386339663236 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit: BPF prog-id=110 op=LOAD Feb 9 13:22:43.748000 audit[6250]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00008e608 items=0 ppid=6239 pid=6250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.748000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633616337643165646339653232356362356438613832386339663236 Feb 9 13:22:43.748000 audit: BPF prog-id=110 op=UNLOAD Feb 9 13:22:43.748000 audit: BPF prog-id=109 op=UNLOAD Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { perfmon } for pid=6250 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit[6250]: AVC avc: denied { bpf } for pid=6250 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 13:22:43.748000 audit: BPF prog-id=111 op=LOAD Feb 9 13:22:43.748000 audit[6250]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00008ea18 items=0 ppid=6239 pid=6250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633616337643165646339653232356362356438613832386339663236 Feb 9 13:22:43.762000 audit[6266]: NETFILTER_CFG table=filter:92 family=2 entries=46 op=nft_register_chain pid=6266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 13:22:43.762000 audit[6266]: SYSCALL arch=c000003e syscall=46 success=yes exit=21860 a0=3 a1=7ffd0d8cea80 a2=0 a3=7ffd0d8cea6c items=0 ppid=4404 pid=6266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 13:22:43.762000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 13:22:43.778179 env[1471]: time="2024-02-09T13:22:43.778131148Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d6b35424-658a-4fba-9cb8-2b698682ca5f,Namespace:default,Attempt:0,} returns sandbox id \"63ac7d1edc9e225cb5d8a828c9f266509c0369e7a5c139662237005ae8a88a2d\"" Feb 9 13:22:43.846277 kubelet[1884]: E0209 13:22:43.846245 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:44.634883 systemd-networkd[1320]: cali60e51b789ff: Gained IPv6LL Feb 9 13:22:44.847175 kubelet[1884]: E0209 13:22:44.847062 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:45.847852 kubelet[1884]: E0209 13:22:45.847727 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:46.848949 kubelet[1884]: E0209 13:22:46.848845 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:47.849597 kubelet[1884]: E0209 13:22:47.849465 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:48.850621 kubelet[1884]: E0209 13:22:48.850530 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:49.851894 kubelet[1884]: E0209 13:22:49.851780 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:50.852607 kubelet[1884]: E0209 13:22:50.852496 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:51.853535 kubelet[1884]: E0209 13:22:51.853426 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:52.854144 kubelet[1884]: E0209 13:22:52.854032 
1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:53.854695 kubelet[1884]: E0209 13:22:53.854584 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:54.855936 kubelet[1884]: E0209 13:22:54.855814 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:55.856282 kubelet[1884]: E0209 13:22:55.856167 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:56.857183 kubelet[1884]: E0209 13:22:56.857062 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:57.574418 kubelet[1884]: E0209 13:22:57.574302 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:57.857821 kubelet[1884]: E0209 13:22:57.857598 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:58.858298 kubelet[1884]: E0209 13:22:58.858226 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:22:59.859348 kubelet[1884]: E0209 13:22:59.859237 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:00.860076 kubelet[1884]: E0209 13:23:00.859959 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:01.861143 kubelet[1884]: E0209 13:23:01.861020 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:02.861927 kubelet[1884]: E0209 13:23:02.861819 1884 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:03.862138 kubelet[1884]: E0209 13:23:03.862041 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:04.863002 kubelet[1884]: E0209 13:23:04.862891 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:05.863577 kubelet[1884]: E0209 13:23:05.863455 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:06.864584 kubelet[1884]: E0209 13:23:06.864452 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:07.865499 kubelet[1884]: E0209 13:23:07.865390 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:08.866118 kubelet[1884]: E0209 13:23:08.866010 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:09.867120 kubelet[1884]: E0209 13:23:09.867008 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:10.867839 kubelet[1884]: E0209 13:23:10.867730 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:11.868514 kubelet[1884]: E0209 13:23:11.868411 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:12.869589 kubelet[1884]: E0209 13:23:12.869452 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:13.870482 kubelet[1884]: E0209 13:23:13.870377 1884 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:14.871184 kubelet[1884]: E0209 13:23:14.871054 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:15.871943 kubelet[1884]: E0209 13:23:15.871819 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:16.872179 kubelet[1884]: E0209 13:23:16.872066 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:17.574665 kubelet[1884]: E0209 13:23:17.574597 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:17.873095 kubelet[1884]: E0209 13:23:17.872872 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:18.873347 kubelet[1884]: E0209 13:23:18.873236 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:19.874096 kubelet[1884]: E0209 13:23:19.873985 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:20.874585 kubelet[1884]: E0209 13:23:20.874455 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:21.875312 kubelet[1884]: E0209 13:23:21.875203 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:22.875966 kubelet[1884]: E0209 13:23:22.875845 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:23.876980 kubelet[1884]: E0209 13:23:23.876867 1884 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:24.878197 kubelet[1884]: E0209 13:23:24.878087 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:25.879124 kubelet[1884]: E0209 13:23:25.879013 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:26.880203 kubelet[1884]: E0209 13:23:26.880091 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:27.880787 kubelet[1884]: E0209 13:23:27.880676 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:28.881470 kubelet[1884]: E0209 13:23:28.881346 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:29.882523 kubelet[1884]: E0209 13:23:29.882446 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:30.883927 kubelet[1884]: E0209 13:23:30.883852 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:31.884127 kubelet[1884]: E0209 13:23:31.884017 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:32.884667 kubelet[1884]: E0209 13:23:32.884561 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:33.884775 kubelet[1884]: E0209 13:23:33.884698 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:34.885028 kubelet[1884]: E0209 13:23:34.884919 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:35.885355 kubelet[1884]: E0209 13:23:35.885244 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:36.886209 kubelet[1884]: E0209 13:23:36.886101 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:37.574097 kubelet[1884]: E0209 13:23:37.573988 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:37.886545 kubelet[1884]: E0209 13:23:37.886325 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:38.886702 kubelet[1884]: E0209 13:23:38.886586 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:39.887093 kubelet[1884]: E0209 13:23:39.886987 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:40.887820 kubelet[1884]: E0209 13:23:40.887711 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:41.888722 kubelet[1884]: E0209 13:23:41.888612 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:42.889652 kubelet[1884]: E0209 13:23:42.889533 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:43.890651 kubelet[1884]: E0209 13:23:43.890525 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:44.891274 kubelet[1884]: E0209 13:23:44.891154 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:23:45.891522 kubelet[1884]: E0209 13:23:45.891419 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:46.892302 kubelet[1884]: E0209 13:23:46.892185 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:47.892967 kubelet[1884]: E0209 13:23:47.892846 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:48.893264 kubelet[1884]: E0209 13:23:48.893139 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:49.894494 kubelet[1884]: E0209 13:23:49.894362 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:50.894945 kubelet[1884]: E0209 13:23:50.894835 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:51.895095 kubelet[1884]: E0209 13:23:51.894989 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:52.896168 kubelet[1884]: E0209 13:23:52.896045 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:53.897090 kubelet[1884]: E0209 13:23:53.896979 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:54.897288 kubelet[1884]: E0209 13:23:54.897147 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:55.898357 kubelet[1884]: E0209 13:23:55.898238 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:23:56.898842 kubelet[1884]: E0209 13:23:56.898734 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:57.574419 kubelet[1884]: E0209 13:23:57.574299 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:57.899132 kubelet[1884]: E0209 13:23:57.898891 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:58.900108 kubelet[1884]: E0209 13:23:58.900033 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:23:59.900830 kubelet[1884]: E0209 13:23:59.900717 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:00.901148 kubelet[1884]: E0209 13:24:00.901036 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:01.902322 kubelet[1884]: E0209 13:24:01.902205 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:02.903225 kubelet[1884]: E0209 13:24:02.903106 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:03.903736 kubelet[1884]: E0209 13:24:03.903663 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:04.904014 kubelet[1884]: E0209 13:24:04.903911 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:05.905009 kubelet[1884]: E0209 13:24:05.904883 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:06.906028 kubelet[1884]: E0209 13:24:06.905917 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:07.907139 kubelet[1884]: E0209 13:24:07.907026 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:08.908160 kubelet[1884]: E0209 13:24:08.908046 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:09.908657 kubelet[1884]: E0209 13:24:09.908585 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:10.909396 kubelet[1884]: E0209 13:24:10.909283 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:11.909722 kubelet[1884]: E0209 13:24:11.909611 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:12.910390 kubelet[1884]: E0209 13:24:12.910285 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:13.911107 kubelet[1884]: E0209 13:24:13.910989 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:14.912142 kubelet[1884]: E0209 13:24:14.912020 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:15.912280 kubelet[1884]: E0209 13:24:15.912168 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:16.913452 kubelet[1884]: E0209 13:24:16.913332 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:17.574538 kubelet[1884]: E0209 13:24:17.574428 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:17.913828 kubelet[1884]: E0209 13:24:17.913594 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:18.914854 kubelet[1884]: E0209 13:24:18.914736 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:19.915044 kubelet[1884]: E0209 13:24:19.914934 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:20.915262 kubelet[1884]: E0209 13:24:20.915159 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:21.916124 kubelet[1884]: E0209 13:24:21.916016 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:22.917349 kubelet[1884]: E0209 13:24:22.917259 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:23.917590 kubelet[1884]: E0209 13:24:23.917480 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:24.918716 kubelet[1884]: E0209 13:24:24.918594 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:25.919741 kubelet[1884]: E0209 13:24:25.919692 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:26.919953 kubelet[1884]: E0209 13:24:26.919847 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:27.920987 kubelet[1884]: E0209 13:24:27.920863 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:28.921232 kubelet[1884]: E0209 13:24:28.921124 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:29.921595 kubelet[1884]: E0209 13:24:29.921479 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:30.922038 kubelet[1884]: E0209 13:24:30.921929 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:31.922472 kubelet[1884]: E0209 13:24:31.922358 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:32.923726 kubelet[1884]: E0209 13:24:32.923608 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:33.924443 kubelet[1884]: E0209 13:24:33.924334 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:34.925434 kubelet[1884]: E0209 13:24:34.925324 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:35.926421 kubelet[1884]: E0209 13:24:35.926304 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:36.927466 kubelet[1884]: E0209 13:24:36.927372 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:37.574479 kubelet[1884]: E0209 13:24:37.574374 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:37.928315 kubelet[1884]: E0209 13:24:37.928098 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:38.928393 kubelet[1884]: E0209 13:24:38.928317 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:39.929712 kubelet[1884]: E0209 13:24:39.929593 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:40.930339 kubelet[1884]: E0209 13:24:40.930223 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:41.931504 kubelet[1884]: E0209 13:24:41.931393 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:42.932380 kubelet[1884]: E0209 13:24:42.932268 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:43.932791 kubelet[1884]: E0209 13:24:43.932669 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:44.933616 kubelet[1884]: E0209 13:24:44.933514 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:45.934544 kubelet[1884]: E0209 13:24:45.934431 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:46.935380 kubelet[1884]: E0209 13:24:46.935259 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:47.936511 kubelet[1884]: E0209 13:24:47.936401 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:48.937676 kubelet[1884]: E0209 13:24:48.937543 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:49.938509 kubelet[1884]: E0209 13:24:49.938399 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:50.939466 kubelet[1884]: E0209 13:24:50.939391 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:51.940069 kubelet[1884]: E0209 13:24:51.939963 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:52.941253 kubelet[1884]: E0209 13:24:52.941146 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:53.941772 kubelet[1884]: E0209 13:24:53.941656 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:54.941978 kubelet[1884]: E0209 13:24:54.941874 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:55.942236 kubelet[1884]: E0209 13:24:55.942123 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:56.942457 kubelet[1884]: E0209 13:24:56.942347 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:57.574724 kubelet[1884]: E0209 13:24:57.574649 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:57.942976 kubelet[1884]: E0209 13:24:57.942781 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:24:58.943489 kubelet[1884]: E0209 13:24:58.943412 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:24:59.944591 kubelet[1884]: E0209 13:24:59.944482 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:00.945183 kubelet[1884]: E0209 13:25:00.945062 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:01.945371 kubelet[1884]: E0209 13:25:01.945260 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:02.946319 kubelet[1884]: E0209 13:25:02.946207 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:03.947447 kubelet[1884]: E0209 13:25:03.947341 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:04.948148 kubelet[1884]: E0209 13:25:04.948039 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:05.948954 kubelet[1884]: E0209 13:25:05.948848 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:06.949410 kubelet[1884]: E0209 13:25:06.949304 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:07.949845 kubelet[1884]: E0209 13:25:07.949734 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:08.950593 kubelet[1884]: E0209 13:25:08.950451 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:25:09.951651 kubelet[1884]: E0209 13:25:09.951523 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:10.953004 kubelet[1884]: E0209 13:25:10.952888 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:11.953393 kubelet[1884]: E0209 13:25:11.953276 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:12.953596 kubelet[1884]: E0209 13:25:12.953454 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:13.953883 kubelet[1884]: E0209 13:25:13.953764 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:14.954576 kubelet[1884]: E0209 13:25:14.954440 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:15.955196 kubelet[1884]: E0209 13:25:15.955084 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:16.955831 kubelet[1884]: E0209 13:25:16.955739 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:17.573861 kubelet[1884]: E0209 13:25:17.573747 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:17.956240 kubelet[1884]: E0209 13:25:17.956007 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:18.957180 kubelet[1884]: E0209 13:25:18.957068 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:25:19.958334 kubelet[1884]: E0209 13:25:19.958200 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:20.959318 kubelet[1884]: E0209 13:25:20.959172 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:21.959895 kubelet[1884]: E0209 13:25:21.959786 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:22.960140 kubelet[1884]: E0209 13:25:22.960031 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:23.960606 kubelet[1884]: E0209 13:25:23.960449 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:24.960749 kubelet[1884]: E0209 13:25:24.960646 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:25.961673 kubelet[1884]: E0209 13:25:25.961562 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:26.962688 kubelet[1884]: E0209 13:25:26.962585 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:27.963053 kubelet[1884]: E0209 13:25:27.962936 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:28.964160 kubelet[1884]: E0209 13:25:28.964030 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:29.965303 kubelet[1884]: E0209 13:25:29.965193 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:25:30.966089 kubelet[1884]: E0209 13:25:30.966014 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:31.966948 kubelet[1884]: E0209 13:25:31.966872 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:32.967109 kubelet[1884]: E0209 13:25:32.967028 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:33.968148 kubelet[1884]: E0209 13:25:33.968032 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:34.968630 kubelet[1884]: E0209 13:25:34.968560 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:35.969818 kubelet[1884]: E0209 13:25:35.969728 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:36.970690 kubelet[1884]: E0209 13:25:36.970618 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:37.573798 kubelet[1884]: E0209 13:25:37.573723 1884 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:37.971781 kubelet[1884]: E0209 13:25:37.971590 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:38.972809 kubelet[1884]: E0209 13:25:38.972728 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:39.973264 kubelet[1884]: E0209 13:25:39.973144 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 13:25:40.974282 kubelet[1884]: E0209 13:25:40.974174 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:41.974585 kubelet[1884]: E0209 13:25:41.974433 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:42.975486 kubelet[1884]: E0209 13:25:42.975366 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:43.976684 kubelet[1884]: E0209 13:25:43.976609 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:44.977088 kubelet[1884]: E0209 13:25:44.977006 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:45.978139 kubelet[1884]: E0209 13:25:45.978023 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:46.978811 kubelet[1884]: E0209 13:25:46.978710 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:47.980035 kubelet[1884]: E0209 13:25:47.979913 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 13:25:48.980351 kubelet[1884]: E0209 13:25:48.980219 1884 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"