Feb 13 10:00:31.553396 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31 Feb 13 10:00:31.553410 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 13 10:00:31.553416 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 10:00:31.553434 kernel: BIOS-provided physical RAM map: Feb 13 10:00:31.553438 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 10:00:31.553441 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 10:00:31.553446 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 10:00:31.553450 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 10:00:31.553454 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 10:00:31.553458 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfb3fff] usable Feb 13 10:00:31.553461 kernel: BIOS-e820: [mem 0x000000006dfb4000-0x000000006dfb4fff] ACPI NVS Feb 13 10:00:31.553465 kernel: BIOS-e820: [mem 0x000000006dfb5000-0x000000006dfb5fff] reserved Feb 13 10:00:31.553469 kernel: BIOS-e820: [mem 0x000000006dfb6000-0x0000000077fc6fff] usable Feb 13 10:00:31.553472 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved Feb 13 10:00:31.553479 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable Feb 13 10:00:31.553483 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS Feb 13 10:00:31.553488 kernel: BIOS-e820: [mem 
0x0000000079665000-0x000000007befefff] reserved Feb 13 10:00:31.553493 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable Feb 13 10:00:31.553497 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved Feb 13 10:00:31.553501 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 10:00:31.553505 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 10:00:31.553510 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 10:00:31.553515 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 10:00:31.553520 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 10:00:31.553524 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable Feb 13 10:00:31.553528 kernel: NX (Execute Disable) protection: active Feb 13 10:00:31.553532 kernel: SMBIOS 3.2.1 present. Feb 13 10:00:31.553536 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Feb 13 10:00:31.553540 kernel: tsc: Detected 3400.000 MHz processor Feb 13 10:00:31.553544 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 10:00:31.553548 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 10:00:31.553553 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 10:00:31.553557 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000 Feb 13 10:00:31.553562 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 10:00:31.553567 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000 Feb 13 10:00:31.553571 kernel: Using GB pages for direct mapping Feb 13 10:00:31.553575 kernel: ACPI: Early table checksum verification disabled Feb 13 10:00:31.553579 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 10:00:31.553583 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 10:00:31.553588 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013) 
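The BIOS-e820 map printed above is mechanical enough to check by hand. As a sketch (the parsing code and variable names are illustrative, not anything the kernel exposes), summing the seven `usable` entries copied verbatim from this log gives 33412002 KiB — within a few KiB of the 33411996K total the kernel reports later in this boot, the small difference being pages the kernel itself trims (e.g. the `e820: update [mem 0x00000000-0x00000fff] usable ==> reserved` line below):

```python
import re

# The seven "usable" BIOS-e820 entries, copied verbatim from the log above.
e820_log = """\
BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000040400000-0x000000006dfb3fff] usable
BIOS-e820: [mem 0x000000006dfb6000-0x0000000077fc6fff] usable
BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
"""

entry = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

regions = []
for m in entry.finditer(e820_log):
    start, end = int(m.group(1), 16), int(m.group(2), 16)
    regions.append((start, end, m.group(3)))

# e820 end addresses are inclusive, so a region spans end - start + 1 bytes.
total_usable = sum(end - start + 1 for start, end, kind in regions if kind == "usable")
print(f"{len(regions)} usable regions, {total_usable // 1024} KiB total")
```

Note also that the top usable address, 0x87f7fffff, is consistent with the `last_pfn = 0x87f800` line below (0x87f800 pages of 4 KiB end exactly at 0x87f800000).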
Feb 13 10:00:31.553594 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 10:00:31.553599 kernel: ACPI: FACS 0x0000000079664F80 000040 Feb 13 10:00:31.553604 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013) Feb 13 10:00:31.553608 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013) Feb 13 10:00:31.553613 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 10:00:31.553617 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 10:00:31.553622 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 13 10:00:31.553627 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 10:00:31.553632 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 10:00:31.553636 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 10:00:31.553641 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 10:00:31.553645 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 10:00:31.553650 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 10:00:31.553654 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 10:00:31.553659 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 10:00:31.553663 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 10:00:31.553669 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 10:00:31.553673 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 10:00:31.553678 kernel: ACPI: DBG2 0x000000007958F0B0 000054 
(v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 10:00:31.553682 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 10:00:31.553686 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013) Feb 13 10:00:31.553691 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 10:00:31.553695 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 10:00:31.553700 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 10:00:31.553706 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf4m 01072009 AMI 00010013) Feb 13 10:00:31.553710 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 10:00:31.553715 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 10:00:31.553719 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 10:00:31.553724 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Feb 13 10:00:31.553728 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 10:00:31.553733 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733] Feb 13 10:00:31.553737 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e] Feb 13 10:00:31.553742 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf] Feb 13 10:00:31.553747 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863] Feb 13 10:00:31.553752 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab] Feb 13 10:00:31.553756 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b] Feb 13 10:00:31.553761 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b] Feb 13 10:00:31.553765 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0] Feb 13 10:00:31.553770 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3] Feb 13 10:00:31.553774 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd] Feb 13 10:00:31.553778 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea] Feb 13 10:00:31.553783 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27] Feb 13 10:00:31.553788 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5] Feb 13 10:00:31.553793 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce] Feb 13 10:00:31.553797 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311] Feb 13 10:00:31.553801 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab] Feb 13 10:00:31.553806 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d] Feb 13 10:00:31.553810 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071] Feb 13 10:00:31.553815 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab] Feb 13 10:00:31.553819 kernel: ACPI: Reserving DBG2 
table memory at [mem 0x7958f0b0-0x7958f103] Feb 13 10:00:31.553824 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e] Feb 13 10:00:31.553829 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17] Feb 13 10:00:31.553834 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b] Feb 13 10:00:31.553838 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93] Feb 13 10:00:31.553843 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26] Feb 13 10:00:31.553847 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f] Feb 13 10:00:31.553851 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f] Feb 13 10:00:31.553856 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf] Feb 13 10:00:31.553860 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf] Feb 13 10:00:31.553865 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b] Feb 13 10:00:31.553870 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1] Feb 13 10:00:31.553875 kernel: No NUMA configuration found Feb 13 10:00:31.553879 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff] Feb 13 10:00:31.553884 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff] Feb 13 10:00:31.553888 kernel: Zone ranges: Feb 13 10:00:31.553893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 10:00:31.553897 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 10:00:31.553902 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff] Feb 13 10:00:31.553906 kernel: Movable zone start for each node Feb 13 10:00:31.553912 kernel: Early memory node ranges Feb 13 10:00:31.553917 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 10:00:31.553921 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 10:00:31.553926 kernel: node 0: [mem 0x0000000040400000-0x000000006dfb3fff] Feb 13 
10:00:31.553930 kernel: node 0: [mem 0x000000006dfb6000-0x0000000077fc6fff] Feb 13 10:00:31.553934 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff] Feb 13 10:00:31.553939 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff] Feb 13 10:00:31.553943 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff] Feb 13 10:00:31.553948 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff] Feb 13 10:00:31.553956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 10:00:31.553961 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 10:00:31.553966 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 10:00:31.553972 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 10:00:31.553976 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Feb 13 10:00:31.553981 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges Feb 13 10:00:31.553986 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges Feb 13 10:00:31.553991 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges Feb 13 10:00:31.553997 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 10:00:31.554002 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 10:00:31.554006 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 10:00:31.554011 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 10:00:31.554016 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 10:00:31.554021 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 10:00:31.554026 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 10:00:31.554030 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 10:00:31.554035 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 10:00:31.554041 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 10:00:31.554046 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge 
lint[0x1]) Feb 13 10:00:31.554050 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 10:00:31.554055 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 10:00:31.554060 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 10:00:31.554065 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 10:00:31.554069 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 10:00:31.554074 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 10:00:31.554079 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 10:00:31.554085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 10:00:31.554090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 10:00:31.554095 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 10:00:31.554099 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 10:00:31.554104 kernel: TSC deadline timer available Feb 13 10:00:31.554109 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 10:00:31.554114 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices Feb 13 10:00:31.554119 kernel: Booting paravirtualized kernel on bare hardware Feb 13 10:00:31.554124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 10:00:31.554130 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 13 10:00:31.554134 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 13 10:00:31.554139 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 13 10:00:31.554144 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 10:00:31.554149 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 8222329 Feb 13 10:00:31.554153 kernel: Policy zone: Normal Feb 13 10:00:31.554159 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 10:00:31.554164 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 10:00:31.554169 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 10:00:31.554174 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 10:00:31.554179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 10:00:31.554184 kernel: Memory: 32683736K/33411996K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 13 10:00:31.554189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 10:00:31.554194 kernel: ftrace: allocating 34475 entries in 135 pages Feb 13 10:00:31.554199 kernel: ftrace: allocated 135 pages with 4 groups Feb 13 10:00:31.554204 kernel: rcu: Hierarchical RCU implementation. Feb 13 10:00:31.554209 kernel: rcu: RCU event tracing is enabled. Feb 13 10:00:31.554214 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 10:00:31.554219 kernel: Rude variant of Tasks RCU enabled. Feb 13 10:00:31.554224 kernel: Tracing variant of Tasks RCU enabled. Feb 13 10:00:31.554229 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 10:00:31.554234 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 10:00:31.554239 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 10:00:31.554244 kernel: random: crng init done Feb 13 10:00:31.554248 kernel: Console: colour dummy device 80x25 Feb 13 10:00:31.554253 kernel: printk: console [tty0] enabled Feb 13 10:00:31.554259 kernel: printk: console [ttyS1] enabled Feb 13 10:00:31.554264 kernel: ACPI: Core revision 20210730 Feb 13 10:00:31.554269 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Feb 13 10:00:31.554273 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 10:00:31.554278 kernel: DMAR: Host address width 39 Feb 13 10:00:31.554283 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Feb 13 10:00:31.554288 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Feb 13 10:00:31.554293 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 10:00:31.554297 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 10:00:31.554303 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Feb 13 10:00:31.554308 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Feb 13 10:00:31.554313 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Feb 13 10:00:31.554318 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 10:00:31.554322 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 10:00:31.554327 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 10:00:31.554332 kernel: x2apic enabled Feb 13 10:00:31.554337 kernel: Switched APIC routing to cluster x2apic. 
Feb 13 10:00:31.554342 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 10:00:31.554347 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 10:00:31.554352 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 10:00:31.554357 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 10:00:31.554362 kernel: process: using mwait in idle threads Feb 13 10:00:31.554367 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 10:00:31.554374 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 10:00:31.554396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 10:00:31.554401 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 10:00:31.554406 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 13 10:00:31.554425 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 10:00:31.554430 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 10:00:31.554435 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 10:00:31.554440 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 10:00:31.554445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 13 10:00:31.554450 kernel: TAA: Mitigation: TSX disabled Feb 13 10:00:31.554455 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 10:00:31.554459 kernel: SRBDS: Mitigation: Microcode Feb 13 10:00:31.554464 kernel: GDS: Vulnerable: No microcode Feb 13 10:00:31.554470 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 10:00:31.554475 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 10:00:31.554480 kernel: x86/fpu: 
Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 10:00:31.554484 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 10:00:31.554489 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 10:00:31.554494 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 10:00:31.554499 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 10:00:31.554504 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 10:00:31.554508 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 10:00:31.554514 kernel: Freeing SMP alternatives memory: 32K Feb 13 10:00:31.554519 kernel: pid_max: default: 32768 minimum: 301 Feb 13 10:00:31.554524 kernel: LSM: Security Framework initializing Feb 13 10:00:31.554529 kernel: SELinux: Initializing. Feb 13 10:00:31.554533 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 10:00:31.554538 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 10:00:31.554543 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 10:00:31.554548 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 10:00:31.554553 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 10:00:31.554559 kernel: ... version: 4 Feb 13 10:00:31.554563 kernel: ... bit width: 48 Feb 13 10:00:31.554568 kernel: ... generic registers: 4 Feb 13 10:00:31.554573 kernel: ... value mask: 0000ffffffffffff Feb 13 10:00:31.554578 kernel: ... max period: 00007fffffffffff Feb 13 10:00:31.554583 kernel: ... fixed-purpose events: 3 Feb 13 10:00:31.554587 kernel: ... event mask: 000000070000000f Feb 13 10:00:31.554592 kernel: signal: max sigframe size: 2032 Feb 13 10:00:31.554597 kernel: rcu: Hierarchical SRCU implementation. 
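The timer-calibration figures in this log are internally consistent and easy to reproduce. With the TSC detected at 3399.906 MHz and a tick rate of HZ=1000 (inferred here from the log's own `lpj=3399906`, not stated explicitly), loops_per_jiffy is the timer frequency divided by HZ, per-CPU BogoMIPS is lpj·HZ/500000, and 16 CPUs give the 108796.99 total printed at SMP bring-up further down. A quick sketch of that arithmetic:

```python
tsc_khz = 3399906            # "tsc: Detected 3399.906 MHz TSC"
HZ = 1000                    # assumed tick rate, consistent with lpj below

# loops_per_jiffy when calibration is skipped: timer frequency / HZ
lpj = tsc_khz * 1000 // HZ   # matches "lpj=3399906" in the log

# BogoMIPS = lpj / (500000 / HZ) = lpj * HZ / 500000
bogomips = lpj * HZ / 500_000      # printed by the kernel as 6799.81
total = round(16 * bogomips, 2)    # 16 CPUs -> the SMP total BogoMIPS
print(lpj, round(bogomips, 2), total)
```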
Feb 13 10:00:31.554603 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 10:00:31.554607 kernel: smp: Bringing up secondary CPUs ... Feb 13 10:00:31.554612 kernel: x86: Booting SMP configuration: Feb 13 10:00:31.554617 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 13 10:00:31.554622 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 10:00:31.554627 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 13 10:00:31.554632 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 10:00:31.554637 kernel: smpboot: Max logical packages: 1 Feb 13 10:00:31.554642 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 10:00:31.554647 kernel: devtmpfs: initialized Feb 13 10:00:31.554652 kernel: x86/mm: Memory block size: 128MB Feb 13 10:00:31.554657 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfb4000-0x6dfb4fff] (4096 bytes) Feb 13 10:00:31.554662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes) Feb 13 10:00:31.554667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 10:00:31.554672 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 10:00:31.554676 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 10:00:31.554681 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 10:00:31.554687 kernel: audit: initializing netlink subsys (disabled) Feb 13 10:00:31.554692 kernel: audit: type=2000 audit(1707818426.120:1): state=initialized audit_enabled=0 res=1 Feb 13 10:00:31.554696 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 10:00:31.554701 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 10:00:31.554706 kernel: cpuidle: using governor menu Feb 13 10:00:31.554711 kernel: ACPI: bus type PCI registered Feb 13 
10:00:31.554716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 10:00:31.554721 kernel: dca service started, version 1.12.1 Feb 13 10:00:31.554725 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 10:00:31.554731 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 13 10:00:31.554736 kernel: PCI: Using configuration type 1 for base access Feb 13 10:00:31.554741 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 10:00:31.554746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 10:00:31.554751 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 10:00:31.554755 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 10:00:31.554760 kernel: ACPI: Added _OSI(Module Device) Feb 13 10:00:31.554765 kernel: ACPI: Added _OSI(Processor Device) Feb 13 10:00:31.554770 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 10:00:31.554775 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 10:00:31.554780 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 13 10:00:31.554785 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 13 10:00:31.554790 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 13 10:00:31.554795 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 10:00:31.554800 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554804 kernel: ACPI: SSDT 0xFFFF9F2700214200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 10:00:31.554809 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 13 10:00:31.554814 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554819 kernel: ACPI: SSDT 0xFFFF9F2701CEB400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 10:00:31.554825 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554829 kernel: ACPI: SSDT 0xFFFF9F2701C5D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 
20160527) Feb 13 10:00:31.554834 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554839 kernel: ACPI: SSDT 0xFFFF9F2701C5C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 10:00:31.554844 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554848 kernel: ACPI: SSDT 0xFFFF9F270014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 10:00:31.554853 kernel: ACPI: Dynamic OEM Table Load: Feb 13 10:00:31.554858 kernel: ACPI: SSDT 0xFFFF9F2701CE8800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 10:00:31.554863 kernel: ACPI: Interpreter enabled Feb 13 10:00:31.554868 kernel: ACPI: PM: (supports S0 S5) Feb 13 10:00:31.554873 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 10:00:31.554878 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 10:00:31.554883 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 10:00:31.554888 kernel: HEST: Table parsing has been initialized. Feb 13 10:00:31.554892 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 10:00:31.554897 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 10:00:31.554902 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 10:00:31.554907 kernel: ACPI: PM: Power Resource [USBC] Feb 13 10:00:31.554913 kernel: ACPI: PM: Power Resource [V0PR] Feb 13 10:00:31.554917 kernel: ACPI: PM: Power Resource [V1PR] Feb 13 10:00:31.554922 kernel: ACPI: PM: Power Resource [V2PR] Feb 13 10:00:31.554927 kernel: ACPI: PM: Power Resource [WRST] Feb 13 10:00:31.554932 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 10:00:31.554937 kernel: ACPI: PM: Power Resource [FN00] Feb 13 10:00:31.554941 kernel: ACPI: PM: Power Resource [FN01] Feb 13 10:00:31.554946 kernel: ACPI: PM: Power Resource [FN02] Feb 13 10:00:31.554951 kernel: ACPI: PM: Power Resource [FN03] Feb 13 10:00:31.554956 kernel: ACPI: PM: Power Resource [FN04] Feb 13 10:00:31.554961 kernel: ACPI: PM: Power Resource [PIN] Feb 13 10:00:31.554966 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 10:00:31.555029 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 10:00:31.555072 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 10:00:31.555112 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 10:00:31.555119 kernel: PCI host bridge to bus 0000:00 Feb 13 10:00:31.555163 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 10:00:31.555203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 10:00:31.555238 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 10:00:31.555273 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Feb 13 10:00:31.555308 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 10:00:31.555343 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 
10:00:31.555413 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 10:00:31.555476 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 10:00:31.555518 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.555563 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Feb 13 10:00:31.555604 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.555650 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Feb 13 10:00:31.555692 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit] Feb 13 10:00:31.555735 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Feb 13 10:00:31.555777 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Feb 13 10:00:31.555824 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 10:00:31.555867 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit] Feb 13 10:00:31.555910 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 10:00:31.555952 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit] Feb 13 10:00:31.555995 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 10:00:31.556039 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit] Feb 13 10:00:31.556079 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 10:00:31.556124 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 10:00:31.556163 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit] Feb 13 10:00:31.556204 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit] Feb 13 10:00:31.556247 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 10:00:31.556290 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 10:00:31.556333 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 10:00:31.556375 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 
64bit] Feb 13 10:00:31.556452 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 10:00:31.556493 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit] Feb 13 10:00:31.556540 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 10:00:31.556586 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 10:00:31.556627 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit] Feb 13 10:00:31.556669 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 10:00:31.556713 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 10:00:31.556754 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit] Feb 13 10:00:31.556795 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 10:00:31.556839 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 10:00:31.556882 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff] Feb 13 10:00:31.556921 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff] Feb 13 10:00:31.556962 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Feb 13 10:00:31.557001 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Feb 13 10:00:31.557041 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Feb 13 10:00:31.557081 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff] Feb 13 10:00:31.557121 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 10:00:31.557168 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 10:00:31.557210 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.557256 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 10:00:31.557299 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.557348 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 10:00:31.557407 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.557467 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 
0x060400 Feb 13 10:00:31.557508 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.557553 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Feb 13 10:00:31.557594 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.557640 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 10:00:31.557681 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 10:00:31.557725 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 10:00:31.557770 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 10:00:31.557811 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Feb 13 10:00:31.557852 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 10:00:31.557897 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 10:00:31.557938 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 10:00:31.557979 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 10:00:31.558028 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 10:00:31.558072 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 10:00:31.558113 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Feb 13 10:00:31.558155 kernel: pci 0000:02:00.0: PME# supported from D3cold Feb 13 10:00:31.558198 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 10:00:31.558241 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 10:00:31.558287 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 10:00:31.558330 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 13 10:00:31.558371 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Feb 13 10:00:31.558455 kernel: pci 0000:02:00.1: PME# supported from D3cold Feb 13 10:00:31.558497 kernel: pci 
0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 10:00:31.558540 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 10:00:31.558581 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 10:00:31.558623 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 13 10:00:31.558664 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 10:00:31.558704 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 10:00:31.558751 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 10:00:31.558794 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Feb 13 10:00:31.558838 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 10:00:31.558879 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Feb 13 10:00:31.558922 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.558962 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 10:00:31.559002 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 10:00:31.559043 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 13 10:00:31.559088 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 13 10:00:31.559166 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Feb 13 10:00:31.559230 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 10:00:31.559272 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Feb 13 10:00:31.559313 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 13 10:00:31.559355 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 10:00:31.559418 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 10:00:31.559478 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 13 10:00:31.559519 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 10:00:31.559563 kernel: pci 0000:07:00.0: [1a03:1150] 
type 01 class 0x060400 Feb 13 10:00:31.559609 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 13 10:00:31.559651 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 10:00:31.559694 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 10:00:31.559734 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 10:00:31.559775 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 10:00:31.559815 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 10:00:31.559861 kernel: pci_bus 0000:08: extended config space not accessible Feb 13 10:00:31.559913 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 10:00:31.559960 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Feb 13 10:00:31.560004 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Feb 13 10:00:31.560048 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 10:00:31.560091 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 10:00:31.560134 kernel: pci 0000:08:00.0: supports D1 D2 Feb 13 10:00:31.560178 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 10:00:31.560223 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 10:00:31.560264 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 10:00:31.560307 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 10:00:31.560314 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 10:00:31.560320 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 10:00:31.560325 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 10:00:31.560330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 10:00:31.560335 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 10:00:31.560342 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 10:00:31.560347 kernel: ACPI: PCI: Interrupt link LNKG 
configured for IRQ 0 Feb 13 10:00:31.560352 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 10:00:31.560358 kernel: iommu: Default domain type: Translated Feb 13 10:00:31.560363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 10:00:31.560445 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 13 10:00:31.560490 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 10:00:31.560533 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 13 10:00:31.560540 kernel: vgaarb: loaded Feb 13 10:00:31.560547 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 10:00:31.560553 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 10:00:31.560558 kernel: PTP clock support registered Feb 13 10:00:31.560563 kernel: PCI: Using ACPI for IRQ routing Feb 13 10:00:31.560568 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 10:00:31.560573 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 10:00:31.560579 kernel: e820: reserve RAM buffer [mem 0x6dfb4000-0x6fffffff] Feb 13 10:00:31.560584 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff] Feb 13 10:00:31.560589 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff] Feb 13 10:00:31.560594 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Feb 13 10:00:31.560599 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Feb 13 10:00:31.560604 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 10:00:31.560609 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 13 10:00:31.560615 kernel: clocksource: Switched to clocksource tsc-early Feb 13 10:00:31.560620 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 10:00:31.560625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 10:00:31.560630 kernel: pnp: PnP ACPI init Feb 13 10:00:31.560673 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 
13 10:00:31.560715 kernel: pnp 00:02: [dma 0 disabled] Feb 13 10:00:31.560756 kernel: pnp 00:03: [dma 0 disabled] Feb 13 10:00:31.560796 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 10:00:31.560833 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 10:00:31.560874 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 10:00:31.560914 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 10:00:31.560954 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 10:00:31.560990 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 10:00:31.561026 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 10:00:31.561063 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 10:00:31.561098 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 10:00:31.561135 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 10:00:31.561173 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 10:00:31.561215 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 10:00:31.561251 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 10:00:31.561287 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 10:00:31.561325 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 10:00:31.561360 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 10:00:31.561441 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 10:00:31.561477 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 10:00:31.561519 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 10:00:31.561527 kernel: pnp: PnP ACPI: found 10 devices Feb 13 10:00:31.561532 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 
13 10:00:31.561538 kernel: NET: Registered PF_INET protocol family Feb 13 10:00:31.561543 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 10:00:31.561548 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 10:00:31.561553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 10:00:31.561558 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 10:00:31.561565 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 10:00:31.561570 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 10:00:31.561575 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 10:00:31.561580 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 10:00:31.561586 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 10:00:31.561591 kernel: NET: Registered PF_XDP protocol family Feb 13 10:00:31.561631 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Feb 13 10:00:31.561674 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Feb 13 10:00:31.561717 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Feb 13 10:00:31.561758 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 10:00:31.561802 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 10:00:31.561845 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 10:00:31.561888 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 10:00:31.561932 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 10:00:31.561974 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 10:00:31.562015 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 13 
10:00:31.562057 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 10:00:31.562098 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 10:00:31.562159 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 10:00:31.562201 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 10:00:31.562243 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 13 10:00:31.562286 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 10:00:31.562328 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 10:00:31.562370 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 13 10:00:31.562415 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 10:00:31.562458 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 10:00:31.562501 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 10:00:31.562544 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 10:00:31.562587 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 10:00:31.562628 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 10:00:31.562672 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 10:00:31.562710 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 10:00:31.562747 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 10:00:31.562784 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 10:00:31.562821 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 10:00:31.562857 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Feb 13 10:00:31.562893 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 10:00:31.562936 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Feb 13 10:00:31.562978 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 
10:00:31.563021 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 13 10:00:31.563060 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Feb 13 10:00:31.563103 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 10:00:31.563143 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Feb 13 10:00:31.563185 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 10:00:31.563226 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Feb 13 10:00:31.563266 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 13 10:00:31.563307 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Feb 13 10:00:31.563314 kernel: PCI: CLS 64 bytes, default 64 Feb 13 10:00:31.563320 kernel: DMAR: No ATSR found Feb 13 10:00:31.563325 kernel: DMAR: No SATC found Feb 13 10:00:31.563330 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 13 10:00:31.563336 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 13 10:00:31.563342 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 13 10:00:31.563348 kernel: DMAR: IOMMU feature pasid inconsistent Feb 13 10:00:31.563353 kernel: DMAR: IOMMU feature eafs inconsistent Feb 13 10:00:31.563358 kernel: DMAR: IOMMU feature prs inconsistent Feb 13 10:00:31.563364 kernel: DMAR: IOMMU feature nest inconsistent Feb 13 10:00:31.563369 kernel: DMAR: IOMMU feature mts inconsistent Feb 13 10:00:31.563376 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 13 10:00:31.563381 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 13 10:00:31.563387 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 10:00:31.563393 kernel: DMAR: dmar1: Using Queued invalidation Feb 13 10:00:31.563435 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 10:00:31.563478 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 10:00:31.563520 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 13 10:00:31.563561 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 13 10:00:31.563602 
kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 13 10:00:31.563643 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 13 10:00:31.563685 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 13 10:00:31.563727 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 13 10:00:31.563769 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 13 10:00:31.563809 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 13 10:00:31.563850 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 13 10:00:31.563891 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 13 10:00:31.563931 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 13 10:00:31.563972 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 13 10:00:31.564013 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 13 10:00:31.564055 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 13 10:00:31.564097 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 13 10:00:31.564138 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 13 10:00:31.564180 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 13 10:00:31.564220 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 13 10:00:31.564262 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 13 10:00:31.564303 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 13 10:00:31.564344 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 13 10:00:31.564390 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 13 10:00:31.564435 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 13 10:00:31.564477 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 10:00:31.564521 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 13 10:00:31.564564 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 13 10:00:31.564609 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 13 10:00:31.564617 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 10:00:31.564622 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 
10:00:31.564629 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB) Feb 13 10:00:31.564634 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 13 10:00:31.564640 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 10:00:31.564645 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 10:00:31.564650 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 10:00:31.564656 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 13 10:00:31.564699 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 10:00:31.564707 kernel: Initialise system trusted keyrings Feb 13 10:00:31.564714 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 10:00:31.564719 kernel: Key type asymmetric registered Feb 13 10:00:31.564725 kernel: Asymmetric key parser 'x509' registered Feb 13 10:00:31.564730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 10:00:31.564735 kernel: io scheduler mq-deadline registered Feb 13 10:00:31.564741 kernel: io scheduler kyber registered Feb 13 10:00:31.564746 kernel: io scheduler bfq registered Feb 13 10:00:31.564787 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 13 10:00:31.564829 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 13 10:00:31.564872 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 13 10:00:31.564914 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 13 10:00:31.564955 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 13 10:00:31.564996 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 13 10:00:31.565038 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 13 10:00:31.565084 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 10:00:31.565092 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 10:00:31.565099 kernel: ERST: Error Record Serialization Table 
(ERST) support is initialized. Feb 13 10:00:31.565104 kernel: pstore: Registered erst as persistent store backend Feb 13 10:00:31.565110 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 10:00:31.565115 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 10:00:31.565120 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 10:00:31.565126 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 10:00:31.565168 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 10:00:31.565176 kernel: i8042: PNP: No PS/2 controller found. Feb 13 10:00:31.565215 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 10:00:31.565254 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 10:00:31.565291 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T10:00:30 UTC (1707818430) Feb 13 10:00:31.565329 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 10:00:31.565337 kernel: fail to initialize ptp_kvm Feb 13 10:00:31.565342 kernel: intel_pstate: Intel P-state driver initializing Feb 13 10:00:31.565348 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 10:00:31.565353 kernel: intel_pstate: HWP enabled Feb 13 10:00:31.565358 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 10:00:31.565365 kernel: vesafb: scrolling: redraw Feb 13 10:00:31.565370 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 10:00:31.565377 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000ed940437, using 768k, total 768k Feb 13 10:00:31.565383 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 10:00:31.565388 kernel: fb0: VESA VGA frame buffer device Feb 13 10:00:31.565393 kernel: NET: Registered PF_INET6 protocol family Feb 13 10:00:31.565399 kernel: Segment Routing with IPv6 Feb 13 10:00:31.565404 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 10:00:31.565409 kernel: NET: Registered 
PF_PACKET protocol family Feb 13 10:00:31.565415 kernel: Key type dns_resolver registered Feb 13 10:00:31.565420 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 10:00:31.565426 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 10:00:31.565431 kernel: IPI shorthand broadcast: enabled Feb 13 10:00:31.565436 kernel: sched_clock: Marking stable (1848771003, 1360190497)->(4632547187, -1423585687) Feb 13 10:00:31.565442 kernel: registered taskstats version 1 Feb 13 10:00:31.565447 kernel: Loading compiled-in X.509 certificates Feb 13 10:00:31.565452 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 10:00:31.565457 kernel: Key type .fscrypt registered Feb 13 10:00:31.565464 kernel: Key type fscrypt-provisioning registered Feb 13 10:00:31.565469 kernel: pstore: Using crash dump compression: deflate Feb 13 10:00:31.565474 kernel: ima: Allocated hash algorithm: sha1 Feb 13 10:00:31.565479 kernel: ima: No architecture policies found Feb 13 10:00:31.565485 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 10:00:31.565490 kernel: Write protecting the kernel read-only data: 28672k Feb 13 10:00:31.565495 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 10:00:31.565501 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 10:00:31.565507 kernel: Run /init as init process Feb 13 10:00:31.565512 kernel: with arguments: Feb 13 10:00:31.565518 kernel: /init Feb 13 10:00:31.565523 kernel: with environment: Feb 13 10:00:31.565528 kernel: HOME=/ Feb 13 10:00:31.565533 kernel: TERM=linux Feb 13 10:00:31.565538 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 10:00:31.565545 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 
+XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 10:00:31.565553 systemd[1]: Detected architecture x86-64. Feb 13 10:00:31.565559 systemd[1]: Running in initrd. Feb 13 10:00:31.565564 systemd[1]: No hostname configured, using default hostname. Feb 13 10:00:31.565570 systemd[1]: Hostname set to . Feb 13 10:00:31.565575 systemd[1]: Initializing machine ID from random generator. Feb 13 10:00:31.565581 systemd[1]: Queued start job for default target initrd.target. Feb 13 10:00:31.565586 systemd[1]: Started systemd-ask-password-console.path. Feb 13 10:00:31.565591 systemd[1]: Reached target cryptsetup.target. Feb 13 10:00:31.565597 systemd[1]: Reached target paths.target. Feb 13 10:00:31.565603 systemd[1]: Reached target slices.target. Feb 13 10:00:31.565608 systemd[1]: Reached target swap.target. Feb 13 10:00:31.565614 systemd[1]: Reached target timers.target. Feb 13 10:00:31.565619 systemd[1]: Listening on iscsid.socket. Feb 13 10:00:31.565625 systemd[1]: Listening on iscsiuio.socket. Feb 13 10:00:31.565630 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 10:00:31.565636 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 10:00:31.565642 systemd[1]: Listening on systemd-journald.socket. Feb 13 10:00:31.565648 systemd[1]: Listening on systemd-networkd.socket. Feb 13 10:00:31.565653 kernel: tsc: Refined TSC clocksource calibration: 3408.046 MHz Feb 13 10:00:31.565659 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fff667c0, max_idle_ns: 440795358023 ns Feb 13 10:00:31.565664 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 10:00:31.565670 kernel: clocksource: Switched to clocksource tsc Feb 13 10:00:31.565675 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 10:00:31.565680 systemd[1]: Reached target sockets.target. Feb 13 10:00:31.565686 systemd[1]: Starting kmod-static-nodes.service... Feb 13 10:00:31.565692 systemd[1]: Finished network-cleanup.service. 
Feb 13 10:00:31.565698 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 10:00:31.565703 systemd[1]: Starting systemd-journald.service... Feb 13 10:00:31.565708 systemd[1]: Starting systemd-modules-load.service... Feb 13 10:00:31.565717 systemd-journald[267]: Journal started Feb 13 10:00:31.565744 systemd-journald[267]: Runtime Journal (/run/log/journal/c09023779b654a16b6735c117730fe20) is 8.0M, max 639.3M, 631.3M free. Feb 13 10:00:31.568209 systemd-modules-load[268]: Inserted module 'overlay' Feb 13 10:00:31.573000 audit: BPF prog-id=6 op=LOAD Feb 13 10:00:31.592376 kernel: audit: type=1334 audit(1707818431.573:2): prog-id=6 op=LOAD Feb 13 10:00:31.592389 systemd[1]: Starting systemd-resolved.service... Feb 13 10:00:31.641416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 10:00:31.641431 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 10:00:31.672393 kernel: Bridge firewalling registered Feb 13 10:00:31.672408 systemd[1]: Started systemd-journald.service. Feb 13 10:00:31.687296 systemd-modules-load[268]: Inserted module 'br_netfilter' Feb 13 10:00:31.735643 kernel: audit: type=1130 audit(1707818431.694:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.693303 systemd-resolved[270]: Positive Trust Anchors: Feb 13 10:00:31.811339 kernel: SCSI subsystem initialized Feb 13 10:00:31.811352 kernel: audit: type=1130 audit(1707818431.746:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:31.811360 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 10:00:31.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.693308 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 10:00:31.912084 kernel: device-mapper: uevent: version 1.0.3 Feb 13 10:00:31.912096 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 10:00:31.912119 kernel: audit: type=1130 audit(1707818431.868:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.693327 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 10:00:32.010597 kernel: audit: type=1130 audit(1707818431.920:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:31.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.694862 systemd-resolved[270]: Defaulting to hostname 'linux'. Feb 13 10:00:32.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.695587 systemd[1]: Started systemd-resolved.service. Feb 13 10:00:32.125645 kernel: audit: type=1130 audit(1707818432.018:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.125662 kernel: audit: type=1130 audit(1707818432.071:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:31.747552 systemd[1]: Finished kmod-static-nodes.service. Feb 13 10:00:31.869515 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 10:00:31.912693 systemd-modules-load[268]: Inserted module 'dm_multipath' Feb 13 10:00:31.921712 systemd[1]: Finished systemd-modules-load.service. Feb 13 10:00:32.019739 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 10:00:32.072657 systemd[1]: Reached target nss-lookup.target. Feb 13 10:00:32.135035 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 10:00:32.154946 systemd[1]: Starting systemd-sysctl.service... Feb 13 10:00:32.155292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 13 10:00:32.158254 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 10:00:32.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.158955 systemd[1]: Finished systemd-sysctl.service. Feb 13 10:00:32.207376 kernel: audit: type=1130 audit(1707818432.156:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.220713 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 10:00:32.283471 kernel: audit: type=1130 audit(1707818432.219:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:32.269684 systemd[1]: Starting dracut-cmdline.service... 
Feb 13 10:00:32.297480 dracut-cmdline[292]: dracut-dracut-053
Feb 13 10:00:32.297480 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 13 10:00:32.297480 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 10:00:32.363460 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 10:00:32.363474 kernel: iscsi: registered transport (tcp)
Feb 13 10:00:32.406332 kernel: iscsi: registered transport (qla4xxx)
Feb 13 10:00:32.406349 kernel: QLogic iSCSI HBA Driver
Feb 13 10:00:32.422139 systemd[1]: Finished dracut-cmdline.service.
Feb 13 10:00:32.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:32.432082 systemd[1]: Starting dracut-pre-udev.service...
Feb 13 10:00:32.488446 kernel: raid6: avx2x4 gen() 48364 MB/s
Feb 13 10:00:32.524446 kernel: raid6: avx2x4 xor() 16276 MB/s
Feb 13 10:00:32.560406 kernel: raid6: avx2x2 gen() 52574 MB/s
Feb 13 10:00:32.595406 kernel: raid6: avx2x2 xor() 32093 MB/s
Feb 13 10:00:32.630410 kernel: raid6: avx2x1 gen() 45345 MB/s
Feb 13 10:00:32.665407 kernel: raid6: avx2x1 xor() 27971 MB/s
Feb 13 10:00:32.699437 kernel: raid6: sse2x4 gen() 21386 MB/s
Feb 13 10:00:32.733454 kernel: raid6: sse2x4 xor() 11978 MB/s
Feb 13 10:00:32.767446 kernel: raid6: sse2x2 gen() 21682 MB/s
Feb 13 10:00:32.801446 kernel: raid6: sse2x2 xor() 13459 MB/s
Feb 13 10:00:32.835442 kernel: raid6: sse2x1 gen() 18298 MB/s
Feb 13 10:00:32.886975 kernel: raid6: sse2x1 xor() 8932 MB/s
Feb 13 10:00:32.886991 kernel: raid6: using algorithm avx2x2 gen() 52574 MB/s
Feb 13 10:00:32.886999 kernel: raid6: .... xor() 32093 MB/s, rmw enabled
Feb 13 10:00:32.905016 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 10:00:32.950398 kernel: xor: automatically using best checksumming function avx
Feb 13 10:00:33.029383 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 13 10:00:33.034264 systemd[1]: Finished dracut-pre-udev.service.
Feb 13 10:00:33.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:33.042000 audit: BPF prog-id=7 op=LOAD
Feb 13 10:00:33.042000 audit: BPF prog-id=8 op=LOAD
Feb 13 10:00:33.044315 systemd[1]: Starting systemd-udevd.service...
Feb 13 10:00:33.052250 systemd-udevd[473]: Using default interface naming scheme 'v252'.
Feb 13 10:00:33.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:33.059655 systemd[1]: Started systemd-udevd.service.
Feb 13 10:00:33.099496 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Feb 13 10:00:33.076016 systemd[1]: Starting dracut-pre-trigger.service...
Feb 13 10:00:33.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:33.104668 systemd[1]: Finished dracut-pre-trigger.service.
Feb 13 10:00:33.116574 systemd[1]: Starting systemd-udev-trigger.service...
Feb 13 10:00:33.166069 systemd[1]: Finished systemd-udev-trigger.service.
Feb 13 10:00:33.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:33.193382 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 10:00:33.195385 kernel: libata version 3.00 loaded.
Feb 13 10:00:33.229969 kernel: ACPI: bus type USB registered
Feb 13 10:00:33.230016 kernel: usbcore: registered new interface driver usbfs
Feb 13 10:00:33.230031 kernel: usbcore: registered new interface driver hub
Feb 13 10:00:33.264907 kernel: usbcore: registered new device driver usb
Feb 13 10:00:33.265377 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 10:00:33.298294 kernel: AES CTR mode by8 optimization enabled
Feb 13 10:00:33.298376 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 13 10:00:33.332056 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 13 10:00:33.338378 kernel: ahci 0000:00:17.0: version 3.0
Feb 13 10:00:33.338469 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 10:00:33.370379 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Feb 13 10:00:33.370449 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 13 10:00:33.370502 kernel: pps pps0: new PPS source ptp0
Feb 13 10:00:33.370564 kernel: igb 0000:04:00.0: added PHC on eth0
Feb 13 10:00:33.370620 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 10:00:33.370672 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:56
Feb 13 10:00:33.370722 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Feb 13 10:00:33.370772 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 10:00:33.374085 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 13 10:00:33.407408 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 13 10:00:33.407477 kernel: pps pps1: new PPS source ptp1
Feb 13 10:00:33.407538 kernel: igb 0000:05:00.0: added PHC on eth1
Feb 13 10:00:33.407596 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 10:00:33.407648 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:57
Feb 13 10:00:33.407697 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Feb 13 10:00:33.407746 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 10:00:33.457075 kernel: scsi host0: ahci
Feb 13 10:00:33.457148 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 10:00:33.458417 kernel: scsi host1: ahci
Feb 13 10:00:33.458453 kernel: igb 0000:05:00.0 eno2: renamed from eth1
Feb 13 10:00:33.489097 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 13 10:00:33.489263 kernel: scsi host2: ahci
Feb 13 10:00:33.526020 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 13 10:00:33.539000 kernel: scsi host3: ahci
Feb 13 10:00:33.552161 kernel: hub 1-0:1.0: USB hub found
Feb 13 10:00:33.552385 kernel: scsi host4: ahci
Feb 13 10:00:33.582787 kernel: hub 1-0:1.0: 16 ports detected
Feb 13 10:00:33.595688 kernel: scsi host5: ahci
Feb 13 10:00:33.620983 kernel: hub 2-0:1.0: USB hub found
Feb 13 10:00:33.621164 kernel: scsi host6: ahci
Feb 13 10:00:33.621181 kernel: hub 2-0:1.0: 10 ports detected
Feb 13 10:00:33.643853 kernel: scsi host7: ahci
Feb 13 10:00:33.656316 kernel: usb: port power management may be unreliable
Feb 13 10:00:33.672377 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 134
Feb 13 10:00:33.848791 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 134
Feb 13 10:00:33.848828 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 13 10:00:33.848864 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 134
Feb 13 10:00:33.896816 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 134
Feb 13 10:00:33.896833 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 134
Feb 13 10:00:33.912827 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 134
Feb 13 10:00:33.928801 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 134
Feb 13 10:00:33.944711 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 134
Feb 13 10:00:33.990725 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002
Feb 13 10:00:33.990800 kernel: igb 0000:04:00.0 eno1: renamed from eth0
Feb 13 10:00:33.990854 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 10:00:34.008382 kernel: hub 1-14:1.0: USB hub found
Feb 13 10:00:34.033345 kernel: hub 1-14:1.0: 4 ports detected
Feb 13 10:00:34.272407 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 10:00:34.272485 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.298737 kernel: port_module: 8 callbacks suppressed
Feb 13 10:00:34.298758 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged
Feb 13 10:00:34.298823 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.315412 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 13 10:00:34.315482 kernel: ata8: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.326407 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 13 10:00:34.376377 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 10:00:34.399427 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.413406 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 10:00:34.429421 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.444446 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 10:00:34.458433 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 10:00:34.489324 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 10:00:34.489340 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 10:00:34.517710 kernel: ata2.00: Features: NCQ-prio
Feb 13 10:00:34.555502 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 10:00:34.555519 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 13 10:00:34.555586 kernel: ata1.00: Features: NCQ-prio
Feb 13 10:00:34.569428 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002
Feb 13 10:00:34.597151 kernel: ata2.00: configured for UDMA/133
Feb 13 10:00:34.597167 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 10:00:34.615413 kernel: ata1.00: configured for UDMA/133
Feb 13 10:00:34.629421 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 10:00:34.647428 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 10:00:34.687762 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:34.687785 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 10:00:34.736980 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 10:00:34.737110 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 10:00:34.737190 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Feb 13 10:00:34.737263 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 10:00:34.737317 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 10:00:34.751934 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Feb 13 10:00:34.767428 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 10:00:34.795183 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 13 10:00:34.795256 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 13 10:00:34.809232 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 10:00:34.809307 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 10:00:34.859803 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:34.873972 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 10:00:34.888086 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 10:00:34.888102 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 10:00:34.889379 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 10:00:34.889457 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 10:00:34.889466 kernel: GPT:9289727 != 937703087
Feb 13 10:00:34.889473 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 10:00:34.889479 kernel: GPT:9289727 != 937703087
Feb 13 10:00:34.889486 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 10:00:34.889492 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 10:00:34.889498 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:34.889505 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Feb 13 10:00:34.914504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 13 10:00:35.121414 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (517)
Feb 13 10:00:35.121430 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
Feb 13 10:00:35.121511 kernel: usbcore: registered new interface driver usbhid
Feb 13 10:00:35.121520 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 13 10:00:35.121574 kernel: usbhid: USB HID core driver
Feb 13 10:00:35.075986 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 13 10:00:35.142464 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 13 10:00:35.187466 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 13 10:00:35.179434 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 13 10:00:35.200893 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 13 10:00:35.346212 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 13 10:00:35.346346 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 13 10:00:35.346355 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 13 10:00:35.346494 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:35.346503 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 13 10:00:35.275090 systemd[1]: Starting disk-uuid.service...
Feb 13 10:00:35.376478 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 10:00:35.376489 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:35.376533 disk-uuid[678]: Primary Header is updated.
Feb 13 10:00:35.376533 disk-uuid[678]: Secondary Entries is updated.
Feb 13 10:00:35.376533 disk-uuid[678]: Secondary Header is updated.
Feb 13 10:00:35.467441 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 10:00:35.467463 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:35.467476 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Feb 13 10:00:35.467552 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 10:00:35.467559 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
Feb 13 10:00:36.418318 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 10:00:36.437253 disk-uuid[679]: The operation has completed successfully.
Feb 13 10:00:36.445495 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 13 10:00:36.473895 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 10:00:36.569222 kernel: audit: type=1130 audit(1707818436.479:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.569238 kernel: audit: type=1131 audit(1707818436.479:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.473941 systemd[1]: Finished disk-uuid.service.
Feb 13 10:00:36.598465 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 10:00:36.481057 systemd[1]: Starting verity-setup.service...
Feb 13 10:00:36.639717 systemd[1]: Found device dev-mapper-usr.device.
Feb 13 10:00:36.650516 systemd[1]: Mounting sysusr-usr.mount...
Feb 13 10:00:36.661970 systemd[1]: Finished verity-setup.service.
Feb 13 10:00:36.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.729381 kernel: audit: type=1130 audit(1707818436.675:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.785050 systemd[1]: Mounted sysusr-usr.mount.
Feb 13 10:00:36.799540 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 13 10:00:36.792647 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 13 10:00:36.884415 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 10:00:36.884434 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 10:00:36.884442 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 10:00:36.884449 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 10:00:36.793046 systemd[1]: Starting ignition-setup.service...
Feb 13 10:00:36.812713 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 13 10:00:36.956392 kernel: audit: type=1130 audit(1707818436.906:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.892816 systemd[1]: Finished ignition-setup.service.
Feb 13 10:00:36.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:36.907720 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 13 10:00:37.045715 kernel: audit: type=1130 audit(1707818436.963:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.045728 kernel: audit: type=1334 audit(1707818437.021:24): prog-id=9 op=LOAD
Feb 13 10:00:37.021000 audit: BPF prog-id=9 op=LOAD
Feb 13 10:00:36.965022 systemd[1]: Starting ignition-fetch-offline.service...
Feb 13 10:00:37.023232 systemd[1]: Starting systemd-networkd.service...
Feb 13 10:00:37.059794 systemd-networkd[873]: lo: Link UP
Feb 13 10:00:37.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.059796 systemd-networkd[873]: lo: Gained carrier
Feb 13 10:00:37.142479 kernel: audit: type=1130 audit(1707818437.074:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.060119 systemd-networkd[873]: Enumeration completed
Feb 13 10:00:37.060189 systemd[1]: Started systemd-networkd.service.
Feb 13 10:00:37.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.060941 systemd-networkd[873]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 10:00:37.224584 kernel: audit: type=1130 audit(1707818437.163:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.224596 iscsid[883]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 13 10:00:37.224596 iscsid[883]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 13 10:00:37.224596 iscsid[883]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 13 10:00:37.224596 iscsid[883]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 13 10:00:37.224596 iscsid[883]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 13 10:00:37.224596 iscsid[883]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 13 10:00:37.224596 iscsid[883]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 13 10:00:37.401526 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 13 10:00:37.401722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready
Feb 13 10:00:37.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.075468 systemd[1]: Reached target network.target.
Feb 13 10:00:37.273160 ignition[870]: Ignition 2.14.0
Feb 13 10:00:37.135006 systemd[1]: Starting iscsiuio.service...
Feb 13 10:00:37.273165 ignition[870]: Stage: fetch-offline
Feb 13 10:00:37.149643 systemd[1]: Started iscsiuio.service.
Feb 13 10:00:37.273191 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:37.504552 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Feb 13 10:00:37.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:37.166606 systemd[1]: Starting iscsid.service...
Feb 13 10:00:37.273204 ignition[870]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:37.224549 systemd[1]: Started iscsid.service.
Feb 13 10:00:37.281749 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:37.245875 systemd[1]: Starting dracut-initqueue.service...
Feb 13 10:00:37.281810 ignition[870]: parsed url from cmdline: ""
Feb 13 10:00:37.263049 systemd-networkd[873]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 10:00:37.281812 ignition[870]: no config URL provided
Feb 13 10:00:37.287180 systemd[1]: Finished dracut-initqueue.service.
Feb 13 10:00:37.281815 ignition[870]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 10:00:37.292455 unknown[870]: fetched base config from "system"
Feb 13 10:00:37.281834 ignition[870]: parsing config with SHA512: b7d8987d92fc4fe843adc4b4ba576f65452f5394d3f219f41ba4de5822615f04b9f956a61f6b9a28da558611901adf35c7b8d3a24926815ac4b90c7a9473680e
Feb 13 10:00:37.292459 unknown[870]: fetched user config from "system"
Feb 13 10:00:37.292706 ignition[870]: fetch-offline: fetch-offline passed
Feb 13 10:00:37.344585 systemd[1]: Finished ignition-fetch-offline.service.
Feb 13 10:00:37.292709 ignition[870]: POST message to Packet Timeline
Feb 13 10:00:37.363713 systemd[1]: Reached target remote-fs-pre.target.
Feb 13 10:00:37.292713 ignition[870]: POST Status error: resource requires networking
Feb 13 10:00:37.382595 systemd[1]: Reached target remote-cryptsetup.target.
Feb 13 10:00:37.292743 ignition[870]: Ignition finished successfully
Feb 13 10:00:37.409592 systemd[1]: Reached target remote-fs.target.
Feb 13 10:00:37.451404 ignition[908]: Ignition 2.14.0
Feb 13 10:00:37.425848 systemd[1]: Starting dracut-pre-mount.service...
Feb 13 10:00:37.451408 ignition[908]: Stage: kargs
Feb 13 10:00:37.444592 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 10:00:37.451489 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:37.445230 systemd[1]: Starting ignition-kargs.service...
Feb 13 10:00:37.451503 ignition[908]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:37.458693 systemd[1]: Finished dracut-pre-mount.service.
Feb 13 10:00:37.454722 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:37.487989 systemd-networkd[873]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 10:00:37.455603 ignition[908]: kargs: kargs passed
Feb 13 10:00:37.516775 systemd-networkd[873]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 10:00:37.455607 ignition[908]: POST message to Packet Timeline
Feb 13 10:00:37.546885 systemd-networkd[873]: enp2s0f1np1: Link UP
Feb 13 10:00:37.455623 ignition[908]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 10:00:37.546982 systemd-networkd[873]: enp2s0f1np1: Gained carrier
Feb 13 10:00:37.457803 ignition[908]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41217->[::1]:53: read: connection refused
Feb 13 10:00:37.559684 systemd-networkd[873]: enp2s0f0np0: Link UP
Feb 13 10:00:37.657894 ignition[908]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 10:00:37.559793 systemd-networkd[873]: eno2: Link UP
Feb 13 10:00:37.659228 ignition[908]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52784->[::1]:53: read: connection refused
Feb 13 10:00:37.559891 systemd-networkd[873]: eno1: Link UP
Feb 13 10:00:38.060457 ignition[908]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 10:00:38.061504 ignition[908]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39735->[::1]:53: read: connection refused
Feb 13 10:00:38.336904 systemd-networkd[873]: enp2s0f0np0: Gained carrier
Feb 13 10:00:38.346604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready
Feb 13 10:00:38.366582 systemd-networkd[873]: enp2s0f0np0: DHCPv4 address 139.178.70.83/31, gateway 139.178.70.82 acquired from 145.40.83.140
Feb 13 10:00:38.861935 ignition[908]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 10:00:38.863353 ignition[908]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34538->[::1]:53: read: connection refused
Feb 13 10:00:38.889663 systemd-networkd[873]: enp2s0f1np1: Gained IPv6LL
Feb 13 10:00:39.913842 systemd-networkd[873]: enp2s0f0np0: Gained IPv6LL
Feb 13 10:00:40.464537 ignition[908]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 10:00:40.465810 ignition[908]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60618->[::1]:53: read: connection refused
Feb 13 10:00:43.668210 ignition[908]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 10:00:43.710056 ignition[908]: GET result: OK
Feb 13 10:00:43.937446 ignition[908]: Ignition finished successfully
Feb 13 10:00:43.941973 systemd[1]: Finished ignition-kargs.service.
Feb 13 10:00:44.028293 kernel: kauditd_printk_skb: 4 callbacks suppressed
Feb 13 10:00:44.028309 kernel: audit: type=1130 audit(1707818443.951:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:43.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:43.961682 ignition[922]: Ignition 2.14.0
Feb 13 10:00:43.954667 systemd[1]: Starting ignition-disks.service...
Feb 13 10:00:43.961686 ignition[922]: Stage: disks
Feb 13 10:00:43.961762 ignition[922]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:43.961771 ignition[922]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:43.963133 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:43.964417 ignition[922]: disks: disks passed
Feb 13 10:00:43.964421 ignition[922]: POST message to Packet Timeline
Feb 13 10:00:43.964432 ignition[922]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 10:00:43.988078 ignition[922]: GET result: OK
Feb 13 10:00:44.193971 ignition[922]: Ignition finished successfully
Feb 13 10:00:44.197270 systemd[1]: Finished ignition-disks.service.
Feb 13 10:00:44.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.209928 systemd[1]: Reached target initrd-root-device.target.
Feb 13 10:00:44.288626 kernel: audit: type=1130 audit(1707818444.208:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.273568 systemd[1]: Reached target local-fs-pre.target.
Feb 13 10:00:44.273603 systemd[1]: Reached target local-fs.target.
Feb 13 10:00:44.297575 systemd[1]: Reached target sysinit.target.
Feb 13 10:00:44.311527 systemd[1]: Reached target basic.target.
Feb 13 10:00:44.312147 systemd[1]: Starting systemd-fsck-root.service...
Feb 13 10:00:44.342414 systemd-fsck[939]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 13 10:00:44.357038 systemd[1]: Finished systemd-fsck-root.service.
Feb 13 10:00:44.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.372998 systemd[1]: Mounting sysroot.mount...
Feb 13 10:00:44.446579 kernel: audit: type=1130 audit(1707818444.364:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.446635 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 13 10:00:44.468941 systemd[1]: Mounted sysroot.mount.
Feb 13 10:00:44.476713 systemd[1]: Reached target initrd-root-fs.target.
Feb 13 10:00:44.494655 systemd[1]: Mounting sysroot-usr.mount...
Feb 13 10:00:44.512455 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 13 10:00:44.528126 systemd[1]: Starting flatcar-static-network.service...
Feb 13 10:00:44.543702 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 10:00:44.543781 systemd[1]: Reached target ignition-diskful.target.
Feb 13 10:00:44.562677 systemd[1]: Mounted sysroot-usr.mount.
Feb 13 10:00:44.585604 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 10:00:44.716923 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (951)
Feb 13 10:00:44.716939 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 10:00:44.716947 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 10:00:44.716955 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 10:00:44.716962 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 10:00:44.717024 coreos-metadata[947]: Feb 13 10:00:44.649 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 10:00:44.717024 coreos-metadata[947]: Feb 13 10:00:44.671 INFO Fetch successful
Feb 13 10:00:44.840479 kernel: audit: type=1130 audit(1707818444.724:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.840493 kernel: audit: type=1130 audit(1707818444.785:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.840544 coreos-metadata[946]: Feb 13 10:00:44.650 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 10:00:44.840544 coreos-metadata[946]: Feb 13 10:00:44.671 INFO Fetch successful
Feb 13 10:00:44.840544 coreos-metadata[946]: Feb 13 10:00:44.689 INFO wrote hostname ci-3510.3.2-a-14c634bc1e to /sysroot/etc/hostname
Feb 13 10:00:44.976653 kernel: audit: type=1130 audit(1707818444.847:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.976664 kernel: audit: type=1131 audit(1707818444.847:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.598076 systemd[1]: Starting initrd-setup-root.service...
Feb 13 10:00:44.641610 systemd[1]: Finished initrd-setup-root.service.
Feb 13 10:00:45.017476 initrd-setup-root[958]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 10:00:45.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:44.747128 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 13 10:00:45.091589 kernel: audit: type=1130 audit(1707818445.025:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:45.091609 initrd-setup-root[967]: cut: /sysroot/etc/group: No such file or directory
Feb 13 10:00:44.786732 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 13 10:00:45.113556 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 10:00:45.123596 ignition[1025]: INFO : Ignition 2.14.0
Feb 13 10:00:45.123596 ignition[1025]: INFO : Stage: mount
Feb 13 10:00:45.123596 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:45.123596 ignition[1025]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:45.123596 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:45.123596 ignition[1025]: INFO : mount: mount passed
Feb 13 10:00:45.123596 ignition[1025]: INFO : POST message to Packet Timeline
Feb 13 10:00:45.123596 ignition[1025]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 10:00:45.123596 ignition[1025]: INFO : GET result: OK
Feb 13 10:00:44.786771 systemd[1]: Finished flatcar-static-network.service.
Feb 13 10:00:45.221706 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 10:00:44.848658 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 10:00:44.967984 systemd[1]: Starting ignition-mount.service...
Feb 13 10:00:44.995960 systemd[1]: Starting sysroot-boot.service...
Feb 13 10:00:45.010888 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 13 10:00:45.010929 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 13 10:00:45.013746 systemd[1]: Finished sysroot-boot.service.
Feb 13 10:00:45.285815 ignition[1025]: INFO : Ignition finished successfully
Feb 13 10:00:45.288428 systemd[1]: Finished ignition-mount.service.
Feb 13 10:00:45.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:45.304477 systemd[1]: Starting ignition-files.service...
Feb 13 10:00:45.375612 kernel: audit: type=1130 audit(1707818445.301:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:45.369271 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 13 10:00:45.421476 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1040)
Feb 13 10:00:45.421487 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 10:00:45.455851 kernel: BTRFS info (device sdb6): using free space tree
Feb 13 10:00:45.455866 kernel: BTRFS info (device sdb6): has skinny extents
Feb 13 10:00:45.503427 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 13 10:00:45.505349 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 13 10:00:45.521532 ignition[1059]: INFO : Ignition 2.14.0
Feb 13 10:00:45.521532 ignition[1059]: INFO : Stage: files
Feb 13 10:00:45.521532 ignition[1059]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:45.521532 ignition[1059]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:45.521532 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:45.521532 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 10:00:45.521532 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 10:00:45.521532 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 10:00:45.523712 unknown[1059]: wrote ssh authorized keys file for user: core
Feb 13 10:00:45.623638 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 10:00:45.623638 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 10:00:45.623638 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 10:00:45.623638 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 13 10:00:45.623638 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 13 10:00:46.013609 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 10:00:46.091993 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 13 10:00:46.091993 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 13 10:00:46.133659 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 13 10:00:46.133659 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 13 10:00:46.506311 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 10:00:46.565654 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 13 10:00:46.565654 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 13 10:00:46.607588 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 10:00:46.607588 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 13 10:00:46.641581 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 10:00:47.019852 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 13 10:00:47.044592 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 13 10:00:47.044592 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 13 10:00:47.044592 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 13 10:00:47.092563 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 10:00:47.836879 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 13 10:00:47.836879 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 13 10:00:47.886630 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1080)
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1226659236"
Feb 13 10:00:47.886692 ignition[1059]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1226659236": device or resource busy
Feb 13 10:00:47.886692 ignition[1059]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1226659236", trying btrfs: device or resource busy
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1226659236"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1226659236"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1226659236"
Feb 13 10:00:47.886692 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1226659236"
Feb 13 10:00:48.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(f): [started] processing unit "packet-phone-home.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(f): [finished] processing unit "packet-phone-home.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(12): [started] processing unit "prepare-critools.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(12): [finished] processing unit "prepare-critools.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service"
Feb 13 10:00:48.186539 ignition[1059]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service"
Feb 13 10:00:48.568688 kernel: audit: type=1130 audit(1707818448.128:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.113902 systemd[1]: Finished ignition-files.service.
Feb 13 10:00:48.582943 ignition[1059]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 10:00:48.582943 ignition[1059]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 13 10:00:48.582943 ignition[1059]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 10:00:48.582943 ignition[1059]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 10:00:48.582943 ignition[1059]: INFO : files: files passed
Feb 13 10:00:48.582943 ignition[1059]: INFO : POST message to Packet Timeline
Feb 13 10:00:48.582943 ignition[1059]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 10:00:48.582943 ignition[1059]: INFO : GET result: OK
Feb 13 10:00:48.582943 ignition[1059]: INFO : Ignition finished successfully
Feb 13 10:00:48.136109 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 13 10:00:48.776714 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 10:00:48.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.195618 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 13 10:00:48.196004 systemd[1]: Starting ignition-quench.service...
Feb 13 10:00:48.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.231833 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 13 10:00:48.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.244917 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 10:00:48.244991 systemd[1]: Finished ignition-quench.service.
Feb 13 10:00:48.263839 systemd[1]: Reached target ignition-complete.target.
Feb 13 10:00:48.291426 systemd[1]: Starting initrd-parse-etc.service...
Feb 13 10:00:48.335314 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 10:00:48.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.335397 systemd[1]: Finished initrd-parse-etc.service.
Feb 13 10:00:48.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.352808 systemd[1]: Reached target initrd-fs.target.
Feb 13 10:00:49.043986 kernel: kauditd_printk_skb: 12 callbacks suppressed
Feb 13 10:00:49.044003 kernel: audit: type=1131 audit(1707818448.960:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.372620 systemd[1]: Reached target initrd.target.
Feb 13 10:00:49.057706 ignition[1105]: INFO : Ignition 2.14.0
Feb 13 10:00:49.057706 ignition[1105]: INFO : Stage: umount
Feb 13 10:00:49.057706 ignition[1105]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 13 10:00:49.057706 ignition[1105]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 13 10:00:49.057706 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 10:00:49.057706 ignition[1105]: INFO : umount: umount passed
Feb 13 10:00:49.057706 ignition[1105]: INFO : POST message to Packet Timeline
Feb 13 10:00:49.057706 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 10:00:49.057706 ignition[1105]: INFO : GET result: OK
Feb 13 10:00:49.499055 kernel: audit: type=1131 audit(1707818449.092:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.499075 kernel: audit: type=1131 audit(1707818449.158:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.499083 kernel: audit: type=1131 audit(1707818449.224:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.499096 kernel: audit: type=1131 audit(1707818449.290:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.499103 kernel: audit: type=1131 audit(1707818449.355:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.499110 kernel: audit: type=1131 audit(1707818449.438:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.390815 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 13 10:00:49.567380 kernel: audit: type=1131 audit(1707818449.506:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.567411 iscsid[883]: iscsid shutting down.
Feb 13 10:00:49.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.635396 kernel: audit: type=1131 audit(1707818449.574:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.635417 ignition[1105]: INFO : Ignition finished successfully
Feb 13 10:00:49.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.392743 systemd[1]: Starting dracut-pre-pivot.service...
Feb 13 10:00:49.719668 kernel: audit: type=1131 audit(1707818449.642:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.422587 systemd[1]: Finished dracut-pre-pivot.service.
Feb 13 10:00:49.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.439428 systemd[1]: Starting initrd-cleanup.service...
Feb 13 10:00:49.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.470318 systemd[1]: Stopped target nss-lookup.target.
Feb 13 10:00:48.489947 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 13 10:00:48.514176 systemd[1]: Stopped target timers.target.
Feb 13 10:00:48.534949 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 10:00:48.535300 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 13 10:00:49.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.556282 systemd[1]: Stopped target initrd.target.
Feb 13 10:00:49.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.835000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 10:00:48.575994 systemd[1]: Stopped target basic.target.
Feb 13 10:00:48.590925 systemd[1]: Stopped target ignition-complete.target.
Feb 13 10:00:48.614011 systemd[1]: Stopped target ignition-diskful.target.
Feb 13 10:00:49.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.635993 systemd[1]: Stopped target initrd-root-device.target.
Feb 13 10:00:49.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.660002 systemd[1]: Stopped target remote-fs.target.
Feb 13 10:00:49.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.683996 systemd[1]: Stopped target remote-fs-pre.target.
Feb 13 10:00:48.699020 systemd[1]: Stopped target sysinit.target.
Feb 13 10:00:49.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.715018 systemd[1]: Stopped target local-fs.target.
Feb 13 10:00:48.734008 systemd[1]: Stopped target local-fs-pre.target.
Feb 13 10:00:48.749994 systemd[1]: Stopped target swap.target.
Feb 13 10:00:49.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.766891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 10:00:50.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.767247 systemd[1]: Stopped dracut-pre-mount.service.
Feb 13 10:00:50.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.786204 systemd[1]: Stopped target cryptsetup.target.
Feb 13 10:00:48.808857 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 10:00:50.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.809200 systemd[1]: Stopped dracut-initqueue.service.
Feb 13 10:00:50.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.834137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 10:00:50.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.834510 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 13 10:00:50.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:50.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:48.849203 systemd[1]: Stopped target paths.target.
Feb 13 10:00:48.863821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 10:00:48.867589 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 13 10:00:48.880016 systemd[1]: Stopped target slices.target.
Feb 13 10:00:48.893957 systemd[1]: Stopped target sockets.target.
Feb 13 10:00:48.911013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 10:00:48.911406 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 13 10:00:48.930079 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 10:00:48.930429 systemd[1]: Stopped ignition-files.service.
Feb 13 10:00:48.947090 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 10:00:48.947457 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 13 10:00:48.964106 systemd[1]: Stopping ignition-mount.service...
Feb 13 10:00:50.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:00:49.050794 systemd[1]: Stopping iscsid.service...
Feb 13 10:00:49.065006 systemd[1]: Stopping sysroot-boot.service...
Feb 13 10:00:49.078475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 10:00:49.078578 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 13 10:00:49.093705 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 10:00:49.093791 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 13 10:00:49.160984 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 10:00:49.161365 systemd[1]: iscsid.service: Deactivated successfully.
Feb 13 10:00:49.161416 systemd[1]: Stopped iscsid.service.
Feb 13 10:00:49.225878 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 10:00:49.225914 systemd[1]: Stopped ignition-mount.service. Feb 13 10:00:49.291824 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 10:00:49.291862 systemd[1]: Stopped sysroot-boot.service. Feb 13 10:00:49.356864 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 10:00:49.356909 systemd[1]: Closed iscsid.socket. Feb 13 10:00:49.421652 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 10:00:49.421690 systemd[1]: Stopped ignition-disks.service. Feb 13 10:00:49.439688 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 10:00:49.439731 systemd[1]: Stopped ignition-kargs.service. Feb 13 10:00:49.507626 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 10:00:49.507681 systemd[1]: Stopped ignition-setup.service. Feb 13 10:00:49.575614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 10:00:49.575672 systemd[1]: Stopped initrd-setup-root.service. Feb 13 10:00:49.643686 systemd[1]: Stopping iscsiuio.service... Feb 13 10:00:49.711763 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 10:00:49.711805 systemd[1]: Stopped iscsiuio.service. Feb 13 10:00:49.726758 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 10:00:49.726795 systemd[1]: Finished initrd-cleanup.service. Feb 13 10:00:50.293392 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Feb 13 10:00:49.742052 systemd[1]: Stopped target network.target. Feb 13 10:00:49.757546 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 10:00:49.757569 systemd[1]: Closed iscsiuio.socket. Feb 13 10:00:49.771608 systemd[1]: Stopping systemd-networkd.service... Feb 13 10:00:49.785492 systemd-networkd[873]: enp2s0f0np0: DHCPv6 lease lost Feb 13 10:00:49.787636 systemd[1]: Stopping systemd-resolved.service... Feb 13 10:00:49.794541 systemd-networkd[873]: enp2s0f1np1: DHCPv6 lease lost Feb 13 10:00:49.803043 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 10:00:50.292000 audit: BPF prog-id=9 op=UNLOAD Feb 13 10:00:49.803242 systemd[1]: Stopped systemd-resolved.service. Feb 13 10:00:49.819264 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 10:00:49.819496 systemd[1]: Stopped systemd-networkd.service. Feb 13 10:00:49.835104 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 10:00:49.835179 systemd[1]: Closed systemd-networkd.socket. Feb 13 10:00:49.851049 systemd[1]: Stopping network-cleanup.service... Feb 13 10:00:49.865570 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 10:00:49.865733 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 10:00:49.881801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 10:00:49.881937 systemd[1]: Stopped systemd-sysctl.service. Feb 13 10:00:49.897913 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 10:00:49.898026 systemd[1]: Stopped systemd-modules-load.service. Feb 13 10:00:49.913920 systemd[1]: Stopping systemd-udevd.service... Feb 13 10:00:49.931122 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 10:00:49.932402 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 10:00:49.932686 systemd[1]: Stopped systemd-udevd.service. Feb 13 10:00:49.945989 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 10:00:49.946120 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 10:00:49.958713 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 10:00:49.958804 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 10:00:49.973609 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 10:00:49.973722 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 10:00:49.988809 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 10:00:49.988939 systemd[1]: Stopped dracut-cmdline.service. 
Feb 13 10:00:50.003689 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 10:00:50.003800 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 10:00:50.021082 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 10:00:50.035462 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 10:00:50.035492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 10:00:50.050536 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 10:00:50.050570 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 10:00:50.066651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 10:00:50.066761 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 10:00:50.083914 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 10:00:50.085088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 10:00:50.085278 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 10:00:50.196794 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 10:00:50.196999 systemd[1]: Stopped network-cleanup.service. Feb 13 10:00:50.206927 systemd[1]: Reached target initrd-switch-root.target. Feb 13 10:00:50.223081 systemd[1]: Starting initrd-switch-root.service... Feb 13 10:00:50.247632 systemd[1]: Switching root. Feb 13 10:00:50.294601 systemd-journald[267]: Journal stopped Feb 13 10:00:54.170254 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 10:00:54.170269 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 13 10:00:54.170277 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 10:00:54.170282 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 10:00:54.170287 kernel: SELinux: policy capability open_perms=1 Feb 13 10:00:54.170292 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 10:00:54.170298 kernel: SELinux: policy capability always_check_network=0 Feb 13 10:00:54.170304 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 10:00:54.170310 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 10:00:54.170315 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 10:00:54.170320 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 10:00:54.170326 systemd[1]: Successfully loaded SELinux policy in 321.900ms. Feb 13 10:00:54.170332 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.628ms. Feb 13 10:00:54.170339 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 10:00:54.170347 systemd[1]: Detected architecture x86-64. Feb 13 10:00:54.170353 systemd[1]: Detected first boot. Feb 13 10:00:54.170358 systemd[1]: Hostname set to . Feb 13 10:00:54.170365 systemd[1]: Initializing machine ID from random generator. Feb 13 10:00:54.170370 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 10:00:54.170380 systemd[1]: Populated /etc with preset unit settings. Feb 13 10:00:54.170387 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 13 10:00:54.170414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 10:00:54.170421 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 10:00:54.170443 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 10:00:54.170448 systemd[1]: Stopped initrd-switch-root.service. Feb 13 10:00:54.170454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 10:00:54.170462 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 10:00:54.170468 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 10:00:54.170474 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 10:00:54.170480 systemd[1]: Created slice system-getty.slice. Feb 13 10:00:54.170486 systemd[1]: Created slice system-modprobe.slice. Feb 13 10:00:54.170492 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 10:00:54.170498 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 10:00:54.170504 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 10:00:54.170511 systemd[1]: Created slice user.slice. Feb 13 10:00:54.170517 systemd[1]: Started systemd-ask-password-console.path. Feb 13 10:00:54.170524 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 10:00:54.170530 systemd[1]: Set up automount boot.automount. Feb 13 10:00:54.170536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 10:00:54.170542 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 10:00:54.170550 systemd[1]: Stopped target initrd-fs.target. Feb 13 10:00:54.170556 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 10:00:54.170562 systemd[1]: Reached target integritysetup.target. 
Feb 13 10:00:54.170570 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 10:00:54.170576 systemd[1]: Reached target remote-fs.target. Feb 13 10:00:54.170582 systemd[1]: Reached target slices.target. Feb 13 10:00:54.170588 systemd[1]: Reached target swap.target. Feb 13 10:00:54.170594 systemd[1]: Reached target torcx.target. Feb 13 10:00:54.170601 systemd[1]: Reached target veritysetup.target. Feb 13 10:00:54.170607 systemd[1]: Listening on systemd-coredump.socket. Feb 13 10:00:54.170613 systemd[1]: Listening on systemd-initctl.socket. Feb 13 10:00:54.170620 systemd[1]: Listening on systemd-networkd.socket. Feb 13 10:00:54.170627 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 10:00:54.170634 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 10:00:54.170640 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 10:00:54.170647 systemd[1]: Mounting dev-hugepages.mount... Feb 13 10:00:54.170654 systemd[1]: Mounting dev-mqueue.mount... Feb 13 10:00:54.170660 systemd[1]: Mounting media.mount... Feb 13 10:00:54.170667 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 10:00:54.170673 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 10:00:54.170679 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 10:00:54.170686 systemd[1]: Mounting tmp.mount... Feb 13 10:00:54.170692 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 10:00:54.170699 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 10:00:54.170706 systemd[1]: Starting kmod-static-nodes.service... Feb 13 10:00:54.170712 systemd[1]: Starting modprobe@configfs.service... Feb 13 10:00:54.170719 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 10:00:54.170726 systemd[1]: Starting modprobe@drm.service... Feb 13 10:00:54.170732 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 10:00:54.170739 systemd[1]: Starting modprobe@fuse.service... 
Feb 13 10:00:54.170745 kernel: fuse: init (API version 7.34) Feb 13 10:00:54.170751 systemd[1]: Starting modprobe@loop.service... Feb 13 10:00:54.170757 kernel: loop: module loaded Feb 13 10:00:54.170765 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 10:00:54.170771 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 10:00:54.170778 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 10:00:54.170784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 10:00:54.170791 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 10:00:54.170797 systemd[1]: Stopped systemd-journald.service. Feb 13 10:00:54.170803 kernel: kauditd_printk_skb: 61 callbacks suppressed Feb 13 10:00:54.170809 kernel: audit: type=1130 audit(1707818453.971:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.170816 kernel: audit: type=1131 audit(1707818453.971:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.170822 kernel: audit: type=1334 audit(1707818454.021:119): prog-id=21 op=LOAD Feb 13 10:00:54.170828 kernel: audit: type=1334 audit(1707818454.077:120): prog-id=22 op=LOAD Feb 13 10:00:54.170834 kernel: audit: type=1334 audit(1707818454.093:121): prog-id=23 op=LOAD Feb 13 10:00:54.170840 kernel: audit: type=1334 audit(1707818454.109:122): prog-id=19 op=UNLOAD Feb 13 10:00:54.170845 systemd[1]: Starting systemd-journald.service... Feb 13 10:00:54.170852 kernel: audit: type=1334 audit(1707818454.109:123): prog-id=20 op=UNLOAD Feb 13 10:00:54.170858 systemd[1]: Starting systemd-modules-load.service... 
Feb 13 10:00:54.170865 kernel: audit: type=1305 audit(1707818454.165:124): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 10:00:54.170873 systemd-journald[1257]: Journal started Feb 13 10:00:54.170897 systemd-journald[1257]: Runtime Journal (/run/log/journal/b9075a4a36424f7e92b7d414fa7154b2) is 8.0M, max 639.3M, 631.3M free. Feb 13 10:00:50.712000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 10:00:50.980000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 10:00:50.982000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 10:00:50.982000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 10:00:50.982000 audit: BPF prog-id=10 op=LOAD Feb 13 10:00:50.982000 audit: BPF prog-id=10 op=UNLOAD Feb 13 10:00:50.982000 audit: BPF prog-id=11 op=LOAD Feb 13 10:00:50.982000 audit: BPF prog-id=11 op=UNLOAD Feb 13 10:00:51.078000 audit[1146]: AVC avc: denied { associate } for pid=1146 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 10:00:51.078000 audit[1146]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1129 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:00:51.078000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 10:00:51.111000 audit[1146]: AVC avc: denied { associate } for pid=1146 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 10:00:51.111000 audit[1146]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b5 a2=1ed a3=0 items=2 ppid=1129 pid=1146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:00:51.111000 audit: CWD cwd="/" Feb 13 10:00:51.111000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:51.111000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:51.111000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 10:00:52.657000 audit: BPF prog-id=12 op=LOAD Feb 13 10:00:52.657000 audit: BPF prog-id=3 op=UNLOAD Feb 13 10:00:52.657000 audit: BPF prog-id=13 op=LOAD Feb 13 10:00:52.657000 audit: BPF prog-id=14 op=LOAD Feb 13 10:00:52.657000 audit: BPF prog-id=4 
op=UNLOAD Feb 13 10:00:52.657000 audit: BPF prog-id=5 op=UNLOAD Feb 13 10:00:52.658000 audit: BPF prog-id=15 op=LOAD Feb 13 10:00:52.658000 audit: BPF prog-id=12 op=UNLOAD Feb 13 10:00:52.658000 audit: BPF prog-id=16 op=LOAD Feb 13 10:00:52.658000 audit: BPF prog-id=17 op=LOAD Feb 13 10:00:52.658000 audit: BPF prog-id=13 op=UNLOAD Feb 13 10:00:52.658000 audit: BPF prog-id=14 op=UNLOAD Feb 13 10:00:52.659000 audit: BPF prog-id=18 op=LOAD Feb 13 10:00:52.659000 audit: BPF prog-id=15 op=UNLOAD Feb 13 10:00:52.659000 audit: BPF prog-id=19 op=LOAD Feb 13 10:00:52.659000 audit: BPF prog-id=20 op=LOAD Feb 13 10:00:52.659000 audit: BPF prog-id=16 op=UNLOAD Feb 13 10:00:52.659000 audit: BPF prog-id=17 op=UNLOAD Feb 13 10:00:52.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:52.712000 audit: BPF prog-id=18 op=UNLOAD Feb 13 10:00:52.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:52.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:53.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:53.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:53.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:53.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.021000 audit: BPF prog-id=21 op=LOAD Feb 13 10:00:54.077000 audit: BPF prog-id=22 op=LOAD Feb 13 10:00:54.093000 audit: BPF prog-id=23 op=LOAD Feb 13 10:00:54.109000 audit: BPF prog-id=19 op=UNLOAD Feb 13 10:00:54.109000 audit: BPF prog-id=20 op=UNLOAD Feb 13 10:00:54.165000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 10:00:51.075416 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 10:00:52.657043 systemd[1]: Queued start job for default target multi-user.target. Feb 13 10:00:51.076152 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 10:00:52.660813 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 10:00:51.076193 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 10:00:51.076247 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 10:00:51.076268 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 10:00:51.076324 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 10:00:51.076347 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 10:00:51.076815 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 10:00:51.076922 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 10:00:51.076951 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 10:00:51.077901 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 10:00:51.077967 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 10:00:51.078004 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 10:00:51.078032 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 10:00:51.078064 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 10:00:51.078090 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 10:00:52.309568 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 10:00:52.309715 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 10:00:52.309774 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Feb 13 10:00:52.309867 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 10:00:52.309897 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 10:00:52.309930 /usr/lib/systemd/system-generators/torcx-generator[1146]: time="2024-02-13T10:00:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 10:00:54.165000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd5b3d1ae0 a2=4000 a3=7ffd5b3d1b7c items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:00:54.281827 kernel: audit: type=1300 audit(1707818454.165:124): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd5b3d1ae0 a2=4000 a3=7ffd5b3d1b7c items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:00:54.281846 systemd[1]: Starting systemd-network-generator.service... 
Feb 13 10:00:54.281856 kernel: audit: type=1327 audit(1707818454.165:124): proctitle="/usr/lib/systemd/systemd-journald" Feb 13 10:00:54.165000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 10:00:54.348565 systemd[1]: Starting systemd-remount-fs.service... Feb 13 10:00:54.373414 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 10:00:54.415424 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 10:00:54.415448 systemd[1]: Stopped verity-setup.service. Feb 13 10:00:54.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.460412 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 10:00:54.479556 systemd[1]: Started systemd-journald.service. Feb 13 10:00:54.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.487904 systemd[1]: Mounted dev-hugepages.mount. Feb 13 10:00:54.495727 systemd[1]: Mounted dev-mqueue.mount. Feb 13 10:00:54.502640 systemd[1]: Mounted media.mount. Feb 13 10:00:54.509647 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 10:00:54.518630 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 10:00:54.527609 systemd[1]: Mounted tmp.mount. Feb 13 10:00:54.534691 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 10:00:54.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.542721 systemd[1]: Finished kmod-static-nodes.service. 
Feb 13 10:00:54.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.551768 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 10:00:54.551881 systemd[1]: Finished modprobe@configfs.service. Feb 13 10:00:54.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.560813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 10:00:54.560938 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 10:00:54.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.569899 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 10:00:54.570068 systemd[1]: Finished modprobe@drm.service. Feb 13 10:00:54.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:54.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.580158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 10:00:54.580447 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 10:00:54.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.590209 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 10:00:54.590533 systemd[1]: Finished modprobe@fuse.service. Feb 13 10:00:54.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.600181 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 10:00:54.600507 systemd[1]: Finished modprobe@loop.service. Feb 13 10:00:54.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:54.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.610187 systemd[1]: Finished systemd-modules-load.service. Feb 13 10:00:54.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.620251 systemd[1]: Finished systemd-network-generator.service. Feb 13 10:00:54.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.629302 systemd[1]: Finished systemd-remount-fs.service. Feb 13 10:00:54.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.638182 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 10:00:54.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.647771 systemd[1]: Reached target network-pre.target. Feb 13 10:00:54.660191 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 10:00:54.672010 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 10:00:54.679657 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 10:00:54.682883 systemd[1]: Starting systemd-hwdb-update.service... 
Feb 13 10:00:54.690018 systemd[1]: Starting systemd-journal-flush.service... Feb 13 10:00:54.693173 systemd-journald[1257]: Time spent on flushing to /var/log/journal/b9075a4a36424f7e92b7d414fa7154b2 is 14.885ms for 1640 entries. Feb 13 10:00:54.693173 systemd-journald[1257]: System Journal (/var/log/journal/b9075a4a36424f7e92b7d414fa7154b2) is 8.0M, max 195.6M, 187.6M free. Feb 13 10:00:54.732185 systemd-journald[1257]: Received client request to flush runtime journal. Feb 13 10:00:54.706485 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 10:00:54.706981 systemd[1]: Starting systemd-random-seed.service... Feb 13 10:00:54.721503 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 10:00:54.721989 systemd[1]: Starting systemd-sysctl.service... Feb 13 10:00:54.729095 systemd[1]: Starting systemd-sysusers.service... Feb 13 10:00:54.735970 systemd[1]: Starting systemd-udev-settle.service... Feb 13 10:00:54.743545 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 10:00:54.751558 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 10:00:54.759590 systemd[1]: Finished systemd-journal-flush.service. Feb 13 10:00:54.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.767598 systemd[1]: Finished systemd-random-seed.service. Feb 13 10:00:54.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.775581 systemd[1]: Finished systemd-sysctl.service. 
Feb 13 10:00:54.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.783569 systemd[1]: Finished systemd-sysusers.service. Feb 13 10:00:54.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.792631 systemd[1]: Reached target first-boot-complete.target. Feb 13 10:00:54.801125 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 10:00:54.810420 udevadm[1273]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 10:00:54.820907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 10:00:54.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.988250 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 10:00:54.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:54.995000 audit: BPF prog-id=24 op=LOAD Feb 13 10:00:54.996000 audit: BPF prog-id=25 op=LOAD Feb 13 10:00:54.996000 audit: BPF prog-id=7 op=UNLOAD Feb 13 10:00:54.996000 audit: BPF prog-id=8 op=UNLOAD Feb 13 10:00:54.997711 systemd[1]: Starting systemd-udevd.service... Feb 13 10:00:55.009440 systemd-udevd[1276]: Using default interface naming scheme 'v252'. Feb 13 10:00:55.027652 systemd[1]: Started systemd-udevd.service. 
Feb 13 10:00:55.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:55.038766 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 10:00:55.038000 audit: BPF prog-id=26 op=LOAD Feb 13 10:00:55.040080 systemd[1]: Starting systemd-networkd.service... Feb 13 10:00:55.080119 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 10:00:55.080180 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 10:00:55.100809 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 10:00:55.120383 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 10:00:55.120433 kernel: ACPI: button: Power Button [PWRF] Feb 13 10:00:55.118000 audit: BPF prog-id=27 op=LOAD Feb 13 10:00:55.135000 audit: BPF prog-id=28 op=LOAD Feb 13 10:00:55.136000 audit: BPF prog-id=29 op=LOAD Feb 13 10:00:55.137941 systemd[1]: Starting systemd-userdbd.service... Feb 13 10:00:55.175393 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1333) Feb 13 10:00:55.189913 systemd[1]: Started systemd-userdbd.service. Feb 13 10:00:55.200379 kernel: IPMI message handler: version 39.2 Feb 13 10:00:55.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:55.210292 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 13 10:00:55.236382 kernel: ipmi device interface Feb 13 10:00:55.236474 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 10:00:55.136000 audit[1280]: AVC avc: denied { confidentiality } for pid=1280 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 10:00:55.275380 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 10:00:55.296378 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 10:00:55.136000 audit[1280]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d92a468ab0 a1=4d8bc a2=7f975c9b0bc5 a3=5 items=42 ppid=1276 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:00:55.136000 audit: CWD cwd="/" Feb 13 10:00:55.136000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=1 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=2 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=3 name=(null) inode=26903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=4 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=5 name=(null) inode=26904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=6 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=7 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=8 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=9 name=(null) inode=26906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=10 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=11 name=(null) inode=26907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=12 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=13 name=(null) inode=26908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: 
PATH item=14 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=15 name=(null) inode=26909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=16 name=(null) inode=26905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=17 name=(null) inode=26910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=18 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=19 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=20 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=21 name=(null) inode=26912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=22 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=23 name=(null) inode=26913 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=24 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=25 name=(null) inode=26914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=26 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=27 name=(null) inode=26915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=28 name=(null) inode=26911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=29 name=(null) inode=26916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=30 name=(null) inode=26902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=31 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=32 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=33 name=(null) inode=26918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=34 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=35 name=(null) inode=26919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=36 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=37 name=(null) inode=26920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=38 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=39 name=(null) inode=26921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=40 name=(null) inode=26917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PATH item=41 name=(null) inode=26922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 10:00:55.136000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 10:00:55.338123 kernel: ipmi_si: IPMI System Interface driver Feb 13 10:00:55.338150 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 10:00:55.338230 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 10:00:55.378079 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 10:00:55.378183 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 10:00:55.397378 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 10:00:55.415376 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 10:00:55.437379 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 10:00:55.437480 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 10:00:55.504414 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 13 10:00:55.504514 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 10:00:55.520965 systemd-networkd[1331]: bond0: netdev ready Feb 13 10:00:55.522906 systemd-networkd[1331]: lo: Link UP Feb 13 10:00:55.522909 systemd-networkd[1331]: lo: Gained carrier Feb 13 10:00:55.523363 systemd-networkd[1331]: Enumeration completed Feb 13 10:00:55.523424 systemd[1]: Started systemd-networkd.service. Feb 13 10:00:55.523635 systemd-networkd[1331]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 10:00:55.524283 systemd-networkd[1331]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:7b.network. Feb 13 10:00:55.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 10:00:55.544953 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 10:00:55.544989 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 10:00:55.630953 kernel: intel_rapl_common: Found RAPL domain package Feb 13 10:00:55.631021 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 10:00:55.631130 kernel: intel_rapl_common: Found RAPL domain core Feb 13 10:00:55.672379 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 13 10:00:55.672564 kernel: intel_rapl_common: Found RAPL domain uncore Feb 13 10:00:55.672581 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 10:00:55.674420 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 13 10:00:55.711008 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 10:00:55.712140 systemd-networkd[1331]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:7a.network. 
Feb 13 10:00:55.733378 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 10:00:55.733461 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 10:00:55.819379 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 10:00:55.883418 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 10:00:55.906416 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 13 10:00:55.906474 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 10:00:55.928414 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 10:00:55.954688 systemd-networkd[1331]: bond0: Link UP Feb 13 10:00:55.954899 systemd-networkd[1331]: enp2s0f1np1: Link UP Feb 13 10:00:55.955042 systemd-networkd[1331]: enp2s0f1np1: Gained carrier Feb 13 10:00:55.956112 systemd-networkd[1331]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:07:a6:7a.network. Feb 13 10:00:55.994468 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 10:00:55.994490 kernel: bond0: active interface up! Feb 13 10:00:56.016073 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 10:00:56.035711 systemd[1]: Finished systemd-udev-settle.service. Feb 13 10:00:56.055442 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 10:00:56.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.064242 systemd[1]: Starting lvm2-activation-early.service... Feb 13 10:00:56.079353 lvm[1382]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 10:00:56.111745 systemd[1]: Finished lvm2-activation-early.service. 
Feb 13 10:00:56.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.119522 systemd[1]: Reached target cryptsetup.target. Feb 13 10:00:56.138047 systemd[1]: Starting lvm2-activation.service... Feb 13 10:00:56.140241 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 10:00:56.145436 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.170439 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.194434 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.217436 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.219924 systemd[1]: Finished lvm2-activation.service. Feb 13 10:00:56.240425 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.258543 systemd[1]: Reached target local-fs-pre.target. Feb 13 10:00:56.263416 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.279484 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 10:00:56.279498 systemd[1]: Reached target local-fs.target. Feb 13 10:00:56.285412 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.302455 systemd[1]: Reached target machines.target. 
Feb 13 10:00:56.308426 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.325098 systemd[1]: Starting ldconfig.service... Feb 13 10:00:56.330419 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.345872 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 10:00:56.345893 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 10:00:56.346425 systemd[1]: Starting systemd-boot-update.service... Feb 13 10:00:56.353377 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.368915 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 10:00:56.375377 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.375923 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 10:00:56.375995 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 10:00:56.376017 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 10:00:56.376503 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 10:00:56.395433 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.407434 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 10:00:56.416449 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.420255 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 10:00:56.428738 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 10:00:56.434601 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 10:00:56.437454 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.455545 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1385 (bootctl) Feb 13 10:00:56.456101 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 10:00:56.459469 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.480439 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.499427 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.500674 systemd-networkd[1331]: enp2s0f0np0: Link UP Feb 13 10:00:56.500837 systemd-networkd[1331]: bond0: Gained carrier Feb 13 10:00:56.500922 systemd-networkd[1331]: enp2s0f0np0: Gained carrier Feb 13 10:00:56.532826 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 10:00:56.532854 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 13 10:00:56.547322 systemd-networkd[1331]: enp2s0f1np1: Link DOWN Feb 13 10:00:56.547333 systemd-networkd[1331]: enp2s0f1np1: Lost carrier Feb 13 10:00:56.726381 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 10:00:56.744377 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Feb 13 10:00:56.744448 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 13 10:00:56.744927 
systemd-networkd[1331]: enp2s0f1np1: Link UP Feb 13 10:00:56.745118 systemd-networkd[1331]: enp2s0f1np1: Gained carrier Feb 13 10:00:56.778308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 10:00:56.778386 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 10:00:56.778641 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 10:00:56.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.781627 systemd-fsck[1393]: fsck.fat 4.2 (2021-01-31) Feb 13 10:00:56.781627 systemd-fsck[1393]: /dev/sdb1: 789 files, 115339/258078 clusters Feb 13 10:00:56.786791 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 10:00:56.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.798164 systemd[1]: Mounting boot.mount... Feb 13 10:00:56.808848 systemd[1]: Mounted boot.mount. Feb 13 10:00:56.827548 systemd[1]: Finished systemd-boot-update.service. Feb 13 10:00:56.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.855424 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 10:00:56.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:00:56.864213 systemd[1]: Starting audit-rules.service... 
Feb 13 10:00:56.871001 systemd[1]: Starting clean-ca-certificates.service...
Feb 13 10:00:56.879983 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 13 10:00:56.884000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 13 10:00:56.884000 audit[1413]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff45b697a0 a2=420 a3=0 items=0 ppid=1396 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:00:56.884000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 13 10:00:56.886043 augenrules[1413]: No rules
Feb 13 10:00:56.889355 systemd[1]: Starting systemd-resolved.service...
Feb 13 10:00:56.897285 systemd[1]: Starting systemd-timesyncd.service...
Feb 13 10:00:56.905935 systemd[1]: Starting systemd-update-utmp.service...
Feb 13 10:00:56.912728 systemd[1]: Finished audit-rules.service.
Feb 13 10:00:56.919588 systemd[1]: Finished clean-ca-certificates.service.
Feb 13 10:00:56.927564 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 13 10:00:56.938601 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 10:00:56.939150 systemd[1]: Finished systemd-update-utmp.service.
Feb 13 10:00:56.943748 ldconfig[1384]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 10:00:56.947581 systemd[1]: Finished ldconfig.service.
Feb 13 10:00:56.955067 systemd[1]: Starting systemd-update-done.service...
Feb 13 10:00:56.961627 systemd[1]: Finished systemd-update-done.service.
Feb 13 10:00:56.966563 systemd-resolved[1419]: Positive Trust Anchors:
Feb 13 10:00:56.966568 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 10:00:56.966587 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 13 10:00:56.969564 systemd[1]: Started systemd-timesyncd.service.
Feb 13 10:00:56.970269 systemd-resolved[1419]: Using system hostname 'ci-3510.3.2-a-14c634bc1e'.
Feb 13 10:00:56.977533 systemd[1]: Started systemd-resolved.service.
Feb 13 10:00:56.985512 systemd[1]: Reached target network.target.
Feb 13 10:00:56.993448 systemd[1]: Reached target nss-lookup.target.
Feb 13 10:00:57.001459 systemd[1]: Reached target sysinit.target.
Feb 13 10:00:57.009485 systemd[1]: Started motdgen.path.
Feb 13 10:00:57.016462 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 13 10:00:57.026450 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 13 10:00:57.034442 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 10:00:57.034456 systemd[1]: Reached target paths.target.
Feb 13 10:00:57.041441 systemd[1]: Reached target time-set.target.
Feb 13 10:00:57.049510 systemd[1]: Started logrotate.timer.
Feb 13 10:00:57.056493 systemd[1]: Started mdadm.timer.
Feb 13 10:00:57.063440 systemd[1]: Reached target timers.target.
Feb 13 10:00:57.070561 systemd[1]: Listening on dbus.socket.
Feb 13 10:00:57.077973 systemd[1]: Starting docker.socket...
Feb 13 10:00:57.085873 systemd[1]: Listening on sshd.socket.
Feb 13 10:00:57.092507 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 13 10:00:57.092718 systemd[1]: Listening on docker.socket.
Feb 13 10:00:57.099492 systemd[1]: Reached target sockets.target.
Feb 13 10:00:57.107450 systemd[1]: Reached target basic.target.
Feb 13 10:00:57.114479 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 13 10:00:57.114493 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 13 10:00:57.114932 systemd[1]: Starting containerd.service...
Feb 13 10:00:57.121862 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 13 10:00:57.130976 systemd[1]: Starting coreos-metadata.service...
Feb 13 10:00:57.137941 systemd[1]: Starting dbus.service...
Feb 13 10:00:57.144070 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 13 10:00:57.149172 jq[1434]: false
Feb 13 10:00:57.150970 coreos-metadata[1427]: Feb 13 10:00:57.150 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 10:00:57.151071 systemd[1]: Starting extend-filesystems.service...
Feb 13 10:00:57.156518 dbus-daemon[1433]: [system] SELinux support is enabled
Feb 13 10:00:57.158492 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 13 10:00:57.159288 systemd[1]: Starting motdgen.service...
Feb 13 10:00:57.159558 extend-filesystems[1435]: Found sda
Feb 13 10:00:57.159558 extend-filesystems[1435]: Found sdb
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb1
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb2
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb3
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found usr
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb4
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb6
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb7
Feb 13 10:00:57.186726 extend-filesystems[1435]: Found sdb9
Feb 13 10:00:57.186726 extend-filesystems[1435]: Checking size of /dev/sdb9
Feb 13 10:00:57.186726 extend-filesystems[1435]: Resized partition /dev/sdb9
Feb 13 10:00:57.280493 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Feb 13 10:00:57.280552 coreos-metadata[1430]: Feb 13 10:00:57.162 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 10:00:57.167108 systemd[1]: Starting prepare-cni-plugins.service...
Feb 13 10:00:57.280770 extend-filesystems[1451]: resize2fs 1.46.5 (30-Dec-2021)
Feb 13 10:00:57.199117 systemd[1]: Starting prepare-critools.service...
Feb 13 10:00:57.218022 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 13 10:00:57.236947 systemd[1]: Starting sshd-keygen.service...
Feb 13 10:00:57.255672 systemd[1]: Starting systemd-logind.service...
Feb 13 10:00:57.272449 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 13 10:00:57.272961 systemd[1]: Starting tcsd.service...
Feb 13 10:00:57.277535 systemd-logind[1463]: Watching system buttons on /dev/input/event3 (Power Button)
Feb 13 10:00:57.277545 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 10:00:57.277554 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Feb 13 10:00:57.277701 systemd-logind[1463]: New seat seat0.
Feb 13 10:00:57.292651 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 10:00:57.293024 systemd[1]: Starting update-engine.service...
Feb 13 10:00:57.308085 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 13 10:00:57.309568 jq[1466]: true
Feb 13 10:00:57.316773 systemd[1]: Started dbus.service.
Feb 13 10:00:57.325363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 10:00:57.325472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 13 10:00:57.325695 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 10:00:57.325803 systemd[1]: Finished motdgen.service.
Feb 13 10:00:57.333204 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 10:00:57.333286 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 13 10:00:57.337572 update_engine[1465]: I0213 10:00:57.337054 1465 main.cc:92] Flatcar Update Engine starting
Feb 13 10:00:57.339239 tar[1468]: ./
Feb 13 10:00:57.339239 tar[1468]: ./macvlan
Feb 13 10:00:57.340443 update_engine[1465]: I0213 10:00:57.340389 1465 update_check_scheduler.cc:74] Next update check in 7m17s
Feb 13 10:00:57.343957 jq[1472]: true
Feb 13 10:00:57.345135 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 10:00:57.345625 tar[1469]: crictl
Feb 13 10:00:57.350186 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Feb 13 10:00:57.350314 systemd[1]: Condition check resulted in tcsd.service being skipped.
Feb 13 10:00:57.351609 systemd[1]: Started update-engine.service.
Feb 13 10:00:57.353485 env[1473]: time="2024-02-13T10:00:57.353461454Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 13 10:00:57.360445 tar[1468]: ./static
Feb 13 10:00:57.362166 env[1473]: time="2024-02-13T10:00:57.362150247Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 10:00:57.363396 env[1473]: time="2024-02-13T10:00:57.363381552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.363474 systemd[1]: Started systemd-logind.service.
Feb 13 10:00:57.364070 env[1473]: time="2024-02-13T10:00:57.364050307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 10:00:57.364102 env[1473]: time="2024-02-13T10:00:57.364069012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365585 env[1473]: time="2024-02-13T10:00:57.365572176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365620 env[1473]: time="2024-02-13T10:00:57.365585082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365620 env[1473]: time="2024-02-13T10:00:57.365593395Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 13 10:00:57.365620 env[1473]: time="2024-02-13T10:00:57.365599144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365665 env[1473]: time="2024-02-13T10:00:57.365645808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365816 env[1473]: time="2024-02-13T10:00:57.365806648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365890 env[1473]: time="2024-02-13T10:00:57.365879023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 10:00:57.365890 env[1473]: time="2024-02-13T10:00:57.365888887Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 10:00:57.367680 env[1473]: time="2024-02-13T10:00:57.367666100Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 13 10:00:57.367706 env[1473]: time="2024-02-13T10:00:57.367681095Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 10:00:57.369954 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 10:00:57.371631 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 13 10:00:57.374107 env[1473]: time="2024-02-13T10:00:57.374095485Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 10:00:57.374135 env[1473]: time="2024-02-13T10:00:57.374110509Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 10:00:57.374135 env[1473]: time="2024-02-13T10:00:57.374118869Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 10:00:57.374171 env[1473]: time="2024-02-13T10:00:57.374139600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374171 env[1473]: time="2024-02-13T10:00:57.374148686Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374171 env[1473]: time="2024-02-13T10:00:57.374156899Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374171 env[1473]: time="2024-02-13T10:00:57.374163531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374240 env[1473]: time="2024-02-13T10:00:57.374170981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374240 env[1473]: time="2024-02-13T10:00:57.374177963Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374240 env[1473]: time="2024-02-13T10:00:57.374184955Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374240 env[1473]: time="2024-02-13T10:00:57.374191280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374240 env[1473]: time="2024-02-13T10:00:57.374198731Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 10:00:57.374320 env[1473]: time="2024-02-13T10:00:57.374248093Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 10:00:57.374320 env[1473]: time="2024-02-13T10:00:57.374300310Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 10:00:57.374801 env[1473]: time="2024-02-13T10:00:57.374733910Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 10:00:57.374828 env[1473]: time="2024-02-13T10:00:57.374813049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374845 env[1473]: time="2024-02-13T10:00:57.374824412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 10:00:57.374872 env[1473]: time="2024-02-13T10:00:57.374861831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374890 env[1473]: time="2024-02-13T10:00:57.374877103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374890 env[1473]: time="2024-02-13T10:00:57.374885901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374920 env[1473]: time="2024-02-13T10:00:57.374892051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374920 env[1473]: time="2024-02-13T10:00:57.374902619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374920 env[1473]: time="2024-02-13T10:00:57.374912831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374964 env[1473]: time="2024-02-13T10:00:57.374919366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374964 env[1473]: time="2024-02-13T10:00:57.374925739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.374964 env[1473]: time="2024-02-13T10:00:57.374933397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 10:00:57.375011 env[1473]: time="2024-02-13T10:00:57.375006208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.375029 env[1473]: time="2024-02-13T10:00:57.375015374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.375029 env[1473]: time="2024-02-13T10:00:57.375025217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.375060 env[1473]: time="2024-02-13T10:00:57.375033362Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 10:00:57.375060 env[1473]: time="2024-02-13T10:00:57.375044734Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 13 10:00:57.375060 env[1473]: time="2024-02-13T10:00:57.375053857Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 10:00:57.375103 env[1473]: time="2024-02-13T10:00:57.375063922Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 13 10:00:57.375103 env[1473]: time="2024-02-13T10:00:57.375083707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 10:00:57.375228 env[1473]: time="2024-02-13T10:00:57.375201992Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375235848Z" level=info msg="Connect containerd service"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375258578Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375560235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375647661Z" level=info msg="Start subscribing containerd event"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375676763Z" level=info msg="Start recovering state"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375682696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375707684Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375729365Z" level=info msg="containerd successfully booted in 0.022610s"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375728289Z" level=info msg="Start event monitor"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375750479Z" level=info msg="Start snapshots syncer"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375760662Z" level=info msg="Start cni network conf syncer for default"
Feb 13 10:00:57.377215 env[1473]: time="2024-02-13T10:00:57.375768145Z" level=info msg="Start streaming server"
Feb 13 10:00:57.381513 systemd[1]: Started containerd.service.
Feb 13 10:00:57.382204 tar[1468]: ./vlan
Feb 13 10:00:57.390617 systemd[1]: Started locksmithd.service.
Feb 13 10:00:57.397527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 10:00:57.397650 systemd[1]: Reached target system-config.target.
Feb 13 10:00:57.402871 tar[1468]: ./portmap
Feb 13 10:00:57.405500 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 10:00:57.405607 systemd[1]: Reached target user-config.target.
Feb 13 10:00:57.422634 tar[1468]: ./host-local
Feb 13 10:00:57.439403 tar[1468]: ./vrf
Feb 13 10:00:57.447915 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 10:00:57.457580 tar[1468]: ./bridge
Feb 13 10:00:57.479298 tar[1468]: ./tuning
Feb 13 10:00:57.496653 tar[1468]: ./firewall
Feb 13 10:00:57.513442 systemd-networkd[1331]: bond0: Gained IPv6LL
Feb 13 10:00:57.519040 tar[1468]: ./host-device
Feb 13 10:00:57.538626 tar[1468]: ./sbr
Feb 13 10:00:57.556538 tar[1468]: ./loopback
Feb 13 10:00:57.573529 tar[1468]: ./dhcp
Feb 13 10:00:57.599486 systemd[1]: Finished prepare-critools.service.
Feb 13 10:00:57.622982 tar[1468]: ./ptp
Feb 13 10:00:57.644053 tar[1468]: ./ipvlan
Feb 13 10:00:57.664381 tar[1468]: ./bandwidth
Feb 13 10:00:57.667408 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Feb 13 10:00:57.695389 extend-filesystems[1451]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Feb 13 10:00:57.695389 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 56
Feb 13 10:00:57.695389 extend-filesystems[1451]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Feb 13 10:00:57.724595 extend-filesystems[1435]: Resized filesystem in /dev/sdb9
Feb 13 10:00:57.695971 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 10:00:57.696089 systemd[1]: Finished extend-filesystems.service.
Feb 13 10:00:57.720012 systemd[1]: Finished prepare-cni-plugins.service.
Feb 13 10:00:57.792712 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 10:00:57.804179 systemd[1]: Finished sshd-keygen.service.
Feb 13 10:00:57.812389 systemd[1]: Starting issuegen.service...
Feb 13 10:00:57.819663 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 10:00:57.819735 systemd[1]: Finished issuegen.service.
Feb 13 10:00:57.827182 systemd[1]: Starting systemd-user-sessions.service...
Feb 13 10:00:57.835781 systemd[1]: Finished systemd-user-sessions.service.
Feb 13 10:00:57.845293 systemd[1]: Started getty@tty1.service.
Feb 13 10:00:57.854260 systemd[1]: Started serial-getty@ttyS1.service.
Feb 13 10:00:57.863645 systemd[1]: Reached target getty.target.
Feb 13 10:00:57.957447 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Feb 13 10:00:58.041544 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:1
Feb 13 10:00:59.008422 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Feb 13 10:01:02.891523 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 10:01:02.900728 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 10:01:02.904235 systemd-logind[1463]: New session 1 of user core.
Feb 13 10:01:02.905054 systemd[1]: Created slice user-500.slice.
Feb 13 10:01:02.905914 systemd[1]: Starting user-runtime-dir@500.service...
Feb 13 10:01:02.907840 systemd-logind[1463]: New session 2 of user core.
Feb 13 10:01:02.913438 systemd[1]: Finished user-runtime-dir@500.service.
Feb 13 10:01:02.914462 systemd[1]: Starting user@500.service...
Feb 13 10:01:02.916971 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:03.016741 systemd[1538]: Queued start job for default target default.target.
Feb 13 10:01:03.016966 systemd[1538]: Reached target paths.target.
Feb 13 10:01:03.016977 systemd[1538]: Reached target sockets.target.
Feb 13 10:01:03.016985 systemd[1538]: Reached target timers.target.
Feb 13 10:01:03.016992 systemd[1538]: Reached target basic.target.
Feb 13 10:01:03.017011 systemd[1538]: Reached target default.target.
Feb 13 10:01:03.017025 systemd[1538]: Startup finished in 95ms.
Feb 13 10:01:03.017074 systemd[1]: Started user@500.service.
Feb 13 10:01:03.017620 systemd[1]: Started session-1.scope.
Feb 13 10:01:03.017975 systemd[1]: Started session-2.scope.
Feb 13 10:01:03.146616 coreos-metadata[1430]: Feb 13 10:01:03.146 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Feb 13 10:01:03.147344 coreos-metadata[1427]: Feb 13 10:01:03.146 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Feb 13 10:01:04.103285 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2
Feb 13 10:01:04.103457 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Feb 13 10:01:04.146828 coreos-metadata[1427]: Feb 13 10:01:04.146 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 13 10:01:04.147081 coreos-metadata[1430]: Feb 13 10:01:04.146 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 13 10:01:04.193961 coreos-metadata[1430]: Feb 13 10:01:04.193 INFO Fetch successful
Feb 13 10:01:04.194744 coreos-metadata[1427]: Feb 13 10:01:04.194 INFO Fetch successful
Feb 13 10:01:04.215604 systemd[1]: Finished coreos-metadata.service.
Feb 13 10:01:04.216427 systemd[1]: Started packet-phone-home.service.
Feb 13 10:01:04.217336 unknown[1427]: wrote ssh authorized keys file for user: core
Feb 13 10:01:04.222084 curl[1560]: % Total % Received % Xferd Average Speed Time Time Time Current
Feb 13 10:01:04.222226 curl[1560]: Dload Upload Total Spent Left Speed
Feb 13 10:01:04.229601 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 10:01:04.229786 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 13 10:01:04.230070 systemd[1]: Reached target multi-user.target.
Feb 13 10:01:04.230714 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 13 10:01:04.234590 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 13 10:01:04.234663 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 13 10:01:04.234787 systemd[1]: Startup finished in 2.019s (kernel) + 19.537s (initrd) + 13.864s (userspace) = 35.421s.
Feb 13 10:01:05.004237 systemd[1]: Created slice system-sshd.slice.
Feb 13 10:01:05.004881 systemd[1]: Started sshd@0-139.178.70.83:22-139.178.68.195:51844.service.
Feb 13 10:01:05.047971 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 51844 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.049203 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.053360 systemd-logind[1463]: New session 3 of user core.
Feb 13 10:01:05.054528 systemd[1]: Started session-3.scope.
Feb 13 10:01:05.110813 systemd[1]: Started sshd@1-139.178.70.83:22-139.178.68.195:51848.service.
Feb 13 10:01:05.141780 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 51848 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.142432 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.144785 systemd-logind[1463]: New session 4 of user core.
Feb 13 10:01:05.145212 systemd[1]: Started session-4.scope.
Feb 13 10:01:05.196670 sshd[1569]: pam_unix(sshd:session): session closed for user core
Feb 13 10:01:05.199223 systemd[1]: sshd@1-139.178.70.83:22-139.178.68.195:51848.service: Deactivated successfully.
Feb 13 10:01:05.199884 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 10:01:05.200594 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit.
Feb 13 10:01:05.201528 systemd[1]: Started sshd@2-139.178.70.83:22-139.178.68.195:51862.service.
Feb 13 10:01:05.202302 systemd-logind[1463]: Removed session 4.
Feb 13 10:01:05.236449 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 51862 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.237254 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.240181 systemd-logind[1463]: New session 5 of user core.
Feb 13 10:01:05.240758 systemd[1]: Started session-5.scope.
Feb 13 10:01:05.292507 sshd[1575]: pam_unix(sshd:session): session closed for user core
Feb 13 10:01:05.294101 systemd[1]: sshd@2-139.178.70.83:22-139.178.68.195:51862.service: Deactivated successfully.
Feb 13 10:01:05.294396 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 10:01:05.294800 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit.
Feb 13 10:01:05.295252 systemd[1]: Started sshd@3-139.178.70.83:22-139.178.68.195:51868.service.
Feb 13 10:01:05.295727 systemd-logind[1463]: Removed session 5.
Feb 13 10:01:05.327385 sshd[1582]: Accepted publickey for core from 139.178.68.195 port 51868 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.328438 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.332185 systemd-logind[1463]: New session 6 of user core.
Feb 13 10:01:05.333037 systemd[1]: Started session-6.scope.
Feb 13 10:01:05.399505 sshd[1582]: pam_unix(sshd:session): session closed for user core
Feb 13 10:01:05.406018 systemd[1]: sshd@3-139.178.70.83:22-139.178.68.195:51868.service: Deactivated successfully.
Feb 13 10:01:05.407585 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 10:01:05.409313 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
Feb 13 10:01:05.411843 systemd[1]: Started sshd@4-139.178.70.83:22-139.178.68.195:51870.service.
Feb 13 10:01:05.414263 systemd-logind[1463]: Removed session 6.
Feb 13 10:01:05.446986 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 51870 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.447781 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.450701 systemd-logind[1463]: New session 7 of user core.
Feb 13 10:01:05.451243 systemd[1]: Started session-7.scope.
Feb 13 10:01:05.526457 curl[1560]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
Feb 13 10:01:05.528847 systemd[1]: packet-phone-home.service: Deactivated successfully.
Feb 13 10:01:05.536845 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 10:01:05.537459 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 13 10:01:05.562002 dbus-daemon[1433]: \xd0-1\xea\x8dU: received setenforce notice (enforcing=1391930608)
Feb 13 10:01:05.566835 sudo[1591]: pam_unix(sudo:session): session closed for user root
Feb 13 10:01:05.572064 sshd[1588]: pam_unix(sshd:session): session closed for user core
Feb 13 10:01:05.578980 systemd[1]: sshd@4-139.178.70.83:22-139.178.68.195:51870.service: Deactivated successfully.
Feb 13 10:01:05.579868 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 10:01:05.580233 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit.
Feb 13 10:01:05.580761 systemd[1]: Started sshd@5-139.178.70.83:22-139.178.68.195:51886.service.
Feb 13 10:01:05.581138 systemd-logind[1463]: Removed session 7.
Feb 13 10:01:05.612871 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 51886 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.613901 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.617324 systemd-logind[1463]: New session 8 of user core.
Feb 13 10:01:05.618170 systemd[1]: Started session-8.scope.
Feb 13 10:01:05.677192 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 10:01:05.677298 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 13 10:01:05.679077 sudo[1599]: pam_unix(sudo:session): session closed for user root
Feb 13 10:01:05.681292 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 10:01:05.681397 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 13 10:01:05.686462 systemd[1]: Stopping audit-rules.service...
Feb 13 10:01:05.685000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 13 10:01:05.687229 auditctl[1602]: No rules
Feb 13 10:01:05.687582 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 10:01:05.687750 systemd[1]: Stopped audit-rules.service.
Feb 13 10:01:05.689358 systemd[1]: Starting audit-rules.service...
Feb 13 10:01:05.692649 kernel: kauditd_printk_skb: 94 callbacks suppressed
Feb 13 10:01:05.692687 kernel: audit: type=1305 audit(1707818465.685:172): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 13 10:01:05.699323 augenrules[1619]: No rules
Feb 13 10:01:05.699782 systemd[1]: Finished audit-rules.service.
Feb 13 10:01:05.700345 sudo[1598]: pam_unix(sudo:session): session closed for user root
Feb 13 10:01:05.701367 sshd[1595]: pam_unix(sshd:session): session closed for user core
Feb 13 10:01:05.703349 systemd[1]: Started sshd@6-139.178.70.83:22-139.178.68.195:51892.service.
Feb 13 10:01:05.703672 systemd[1]: sshd@5-139.178.70.83:22-139.178.68.195:51886.service: Deactivated successfully.
Feb 13 10:01:05.704034 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 10:01:05.704340 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Feb 13 10:01:05.705029 systemd-logind[1463]: Removed session 8.
Feb 13 10:01:05.685000 audit[1602]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd62e6d1c0 a2=420 a3=0 items=0 ppid=1 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:01:05.708436 kernel: audit: type=1300 audit(1707818465.685:172): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd62e6d1c0 a2=420 a3=0 items=0 ppid=1 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:01:05.685000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 13 10:01:05.748811 kernel: audit: type=1327 audit(1707818465.685:172): proctitle=2F7362696E2F617564697463746C002D44
Feb 13 10:01:05.748848 kernel: audit: type=1131 audit(1707818465.686:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.771225 kernel: audit: type=1130 audit(1707818465.698:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.793658 kernel: audit: type=1106 audit(1707818465.698:175): pid=1598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.698000 audit[1598]: USER_END pid=1598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.797035 sshd[1624]: Accepted publickey for core from 139.178.68.195 port 51892 ssh2: RSA SHA256:wM1bdaCPwerSW1mOnJZTsZDRswKX2qe3WXCkDWmUy9w
Feb 13 10:01:05.798678 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 13 10:01:05.800989 systemd-logind[1463]: New session 9 of user core.
Feb 13 10:01:05.801401 systemd[1]: Started session-9.scope.
Feb 13 10:01:05.819626 kernel: audit: type=1104 audit(1707818465.699:176): pid=1598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.699000 audit[1598]: CRED_DISP pid=1598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.843158 kernel: audit: type=1106 audit(1707818465.700:177): pid=1595 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.700000 audit[1595]: USER_END pid=1595 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.847689 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 10:01:05.847796 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 13 10:01:05.875366 kernel: audit: type=1104 audit(1707818465.700:178): pid=1595 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.700000 audit[1595]: CRED_DISP pid=1595 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.901335 kernel: audit: type=1130 audit(1707818465.702:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.83:22-139.178.68.195:51892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.83:22-139.178.68.195:51892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.83:22-139.178.68.195:51886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.795000 audit[1624]: USER_ACCT pid=1624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.797000 audit[1624]: CRED_ACQ pid=1624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.797000 audit[1624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf9293200 a2=3 a3=0 items=0 ppid=1 pid=1624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:01:05.797000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 13 10:01:05.802000 audit[1624]: USER_START pid=1624 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.802000 audit[1627]: CRED_ACQ pid=1627 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Feb 13 10:01:05.846000 audit[1628]: USER_ACCT pid=1628 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.846000 audit[1628]: CRED_REFR pid=1628 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:05.847000 audit[1628]: USER_START pid=1628 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 13 10:01:08.118788 systemd[1]: Reloading.
Feb 13 10:01:08.135361 /usr/lib/systemd/system-generators/torcx-generator[1660]: time="2024-02-13T10:01:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 10:01:08.135386 /usr/lib/systemd/system-generators/torcx-generator[1660]: time="2024-02-13T10:01:08Z" level=info msg="torcx already run"
Feb 13 10:01:08.187129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 10:01:08.187137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 10:01:08.199461 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit: BPF prog-id=37 op=LOAD
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.240000 audit: BPF prog-id=38 op=LOAD
Feb 13 10:01:08.240000 audit: BPF prog-id=24 op=UNLOAD
Feb 13 10:01:08.240000 audit: BPF prog-id=25 op=UNLOAD
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit: BPF prog-id=39 op=LOAD
Feb 13 10:01:08.241000 audit: BPF prog-id=35 op=UNLOAD
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.241000 audit: BPF prog-id=40 op=LOAD
Feb 13 10:01:08.241000 audit: BPF prog-id=30 op=UNLOAD
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit: BPF prog-id=41 op=LOAD
Feb 13 10:01:08.242000 audit: BPF prog-id=21 op=UNLOAD
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit: BPF prog-id=42 op=LOAD
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit: BPF prog-id=43 op=LOAD
Feb 13 10:01:08.243000 audit: BPF prog-id=22 op=UNLOAD
Feb 13 10:01:08.243000 audit: BPF prog-id=23 op=UNLOAD
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit: BPF prog-id=44 op=LOAD
Feb 13 10:01:08.243000 audit: BPF prog-id=32 op=UNLOAD
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit: BPF prog-id=45 op=LOAD
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit: BPF prog-id=46 op=LOAD
Feb 13 10:01:08.244000 audit: BPF prog-id=33 op=UNLOAD
Feb 13 10:01:08.244000 audit: BPF prog-id=34 op=UNLOAD
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit: BPF prog-id=47 op=LOAD
Feb 13 10:01:08.244000 audit: BPF prog-id=27 op=UNLOAD
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit: BPF prog-id=48 op=LOAD Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit: BPF prog-id=49 op=LOAD Feb 13 10:01:08.245000 audit: BPF prog-id=28 op=UNLOAD Feb 13 10:01:08.245000 audit: BPF prog-id=29 op=UNLOAD Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit: BPF prog-id=50 op=LOAD Feb 13 10:01:08.246000 audit: BPF prog-id=31 op=UNLOAD Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.246000 audit: BPF prog-id=51 op=LOAD Feb 13 10:01:08.246000 audit: BPF prog-id=26 op=UNLOAD Feb 13 10:01:08.251732 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 10:01:08.255320 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 13 10:01:08.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:08.256525 systemd[1]: Reached target network-online.target. Feb 13 10:01:08.257211 systemd[1]: Started kubelet.service. Feb 13 10:01:08.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:08.284707 kubelet[1717]: E0213 10:01:08.284650 1717 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 10:01:08.285889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 10:01:08.285959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 10:01:08.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 13 10:01:08.711261 systemd[1]: Stopped kubelet.service. Feb 13 10:01:08.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:08.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:08.728662 systemd[1]: Reloading. 
Feb 13 10:01:08.746835 /usr/lib/systemd/system-generators/torcx-generator[1814]: time="2024-02-13T10:01:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 10:01:08.746860 /usr/lib/systemd/system-generators/torcx-generator[1814]: time="2024-02-13T10:01:08Z" level=info msg="torcx already run" Feb 13 10:01:08.797748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 10:01:08.797757 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 10:01:08.810758 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit: BPF prog-id=52 op=LOAD Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.851000 audit: BPF prog-id=53 op=LOAD Feb 13 10:01:08.851000 audit: BPF prog-id=37 op=UNLOAD Feb 13 10:01:08.851000 audit: BPF prog-id=38 op=UNLOAD Feb 13 10:01:08.852000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit: BPF prog-id=54 op=LOAD Feb 13 10:01:08.852000 audit: BPF prog-id=39 op=UNLOAD Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.852000 audit: BPF prog-id=55 op=LOAD Feb 13 10:01:08.852000 audit: BPF prog-id=40 op=UNLOAD Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit: BPF prog-id=56 op=LOAD Feb 13 10:01:08.853000 audit: BPF prog-id=41 op=UNLOAD Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit: BPF prog-id=57 op=LOAD Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 10:01:08.853000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit: BPF prog-id=58 op=LOAD Feb 13 10:01:08.854000 audit: BPF prog-id=42 op=UNLOAD Feb 13 10:01:08.854000 audit: BPF prog-id=43 op=UNLOAD Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.854000 audit: BPF prog-id=59 op=LOAD Feb 13 10:01:08.855000 audit: BPF prog-id=44 op=UNLOAD Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit: BPF prog-id=60 op=LOAD Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit: BPF prog-id=61 op=LOAD Feb 13 10:01:08.855000 audit: BPF prog-id=45 op=UNLOAD Feb 13 10:01:08.855000 audit: BPF prog-id=46 op=UNLOAD Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit: BPF prog-id=62 op=LOAD Feb 13 10:01:08.855000 audit: BPF prog-id=47 op=UNLOAD Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit: BPF prog-id=63 op=LOAD Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.855000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit: BPF prog-id=64 op=LOAD Feb 13 10:01:08.856000 audit: BPF prog-id=48 op=UNLOAD Feb 13 10:01:08.856000 audit: BPF prog-id=49 op=UNLOAD Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit: BPF prog-id=65 op=LOAD Feb 13 10:01:08.857000 audit: BPF prog-id=50 op=UNLOAD Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:08.857000 audit: BPF prog-id=66 op=LOAD Feb 13 10:01:08.857000 audit: BPF prog-id=51 op=UNLOAD Feb 13 10:01:08.865227 systemd[1]: Started kubelet.service. Feb 13 10:01:08.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:08.886088 kubelet[1873]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 13 10:01:08.886088 kubelet[1873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 10:01:08.886088 kubelet[1873]: I0213 10:01:08.886086 1873 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 10:01:08.886865 kubelet[1873]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 10:01:08.886865 kubelet[1873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 10:01:09.055621 kubelet[1873]: I0213 10:01:09.055542 1873 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 13 10:01:09.055621 kubelet[1873]: I0213 10:01:09.055558 1873 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 10:01:09.055706 kubelet[1873]: I0213 10:01:09.055701 1873 server.go:836] "Client rotation is on, will bootstrap in background" Feb 13 10:01:09.056835 kubelet[1873]: I0213 10:01:09.056826 1873 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 10:01:09.093602 kubelet[1873]: I0213 10:01:09.093546 1873 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 10:01:09.094042 kubelet[1873]: I0213 10:01:09.093973 1873 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 10:01:09.094204 kubelet[1873]: I0213 10:01:09.094119 1873 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 10:01:09.094204 kubelet[1873]: I0213 10:01:09.094163 1873 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 10:01:09.094204 kubelet[1873]: I0213 10:01:09.094193 1873 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 10:01:09.094710 kubelet[1873]: I0213 10:01:09.094402 1873 state_mem.go:36] "Initialized new 
in-memory state store" Feb 13 10:01:09.100353 kubelet[1873]: I0213 10:01:09.100312 1873 kubelet.go:398] "Attempting to sync node with API server" Feb 13 10:01:09.100597 kubelet[1873]: I0213 10:01:09.100369 1873 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 10:01:09.100597 kubelet[1873]: I0213 10:01:09.100466 1873 kubelet.go:297] "Adding apiserver pod source" Feb 13 10:01:09.100597 kubelet[1873]: I0213 10:01:09.100529 1873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 10:01:09.100597 kubelet[1873]: E0213 10:01:09.100549 1873 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:09.101151 kubelet[1873]: E0213 10:01:09.100610 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:09.102029 kubelet[1873]: I0213 10:01:09.101983 1873 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 10:01:09.102643 kubelet[1873]: W0213 10:01:09.102602 1873 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 10:01:09.103581 kubelet[1873]: I0213 10:01:09.103539 1873 server.go:1186] "Started kubelet" Feb 13 10:01:09.103965 kubelet[1873]: I0213 10:01:09.103917 1873 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 10:01:09.104492 kubelet[1873]: E0213 10:01:09.104362 1873 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 10:01:09.104645 kubelet[1873]: E0213 10:01:09.104520 1873 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 10:01:09.104000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:09.104000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 10:01:09.104000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001194a20 a1=c0011ac6c0 a2=c0011949f0 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.104000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 10:01:09.104000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:09.104000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 10:01:09.104000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000154ce0 a1=c0011ac6d8 a2=c001194ab0 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.104000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106018 1873 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106110 1873 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106244 1873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106432 1873 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106580 1873 server.go:451] "Adding debug handlers to kubelet server" Feb 13 10:01:09.107470 kubelet[1873]: I0213 10:01:09.106588 1873 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 10:01:09.112904 kubelet[1873]: E0213 10:01:09.112890 1873 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.83\" not found" node="10.67.80.83" Feb 13 10:01:09.119637 kubelet[1873]: I0213 10:01:09.119627 1873 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 10:01:09.119637 kubelet[1873]: I0213 10:01:09.119635 1873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 10:01:09.119738 kubelet[1873]: I0213 10:01:09.119645 1873 state_mem.go:36] "Initialized new in-memory state store" Feb 13 10:01:09.120740 kubelet[1873]: I0213 10:01:09.120709 1873 
policy_none.go:49] "None policy: Start" Feb 13 10:01:09.121058 kubelet[1873]: I0213 10:01:09.121050 1873 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 10:01:09.121104 kubelet[1873]: I0213 10:01:09.121074 1873 state_mem.go:35] "Initializing new in-memory state store" Feb 13 10:01:09.123609 systemd[1]: Created slice kubepods.slice. Feb 13 10:01:09.125830 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 10:01:09.125000 audit[1898]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.125000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe14c1fee0 a2=0 a3=7ffe14c1fecc items=0 ppid=1873 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 13 10:01:09.125000 audit[1901]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.125000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd93211930 a2=0 a3=7ffd9321191c items=0 ppid=1873 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 13 10:01:09.127273 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 13 10:01:09.142024 kubelet[1873]: I0213 10:01:09.141985 1873 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 10:01:09.142024 kubelet[1873]: I0213 10:01:09.142017 1873 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 13 10:01:09.140000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:09.140000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 10:01:09.140000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002cd770 a1=c000f2ac30 a2=c0002cd740 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.140000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 10:01:09.142265 kubelet[1873]: I0213 10:01:09.142201 1873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 10:01:09.142437 kubelet[1873]: E0213 10:01:09.142389 1873 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.83\" not found" Feb 13 10:01:09.126000 audit[1903]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.126000 audit[1903]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=312 a0=3 a1=7fffe2870110 a2=0 a3=7fffe28700fc items=0 ppid=1873 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.126000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 10:01:09.152000 audit[1908]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.152000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff2f4f26b0 a2=0 a3=7fff2f4f269c items=0 ppid=1873 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 10:01:09.184000 audit[1913]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.184000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff49a65af0 a2=0 a3=7fff49a65adc items=0 ppid=1873 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 13 10:01:09.185000 
audit[1914]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.185000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdb85ee070 a2=0 a3=7ffdb85ee05c items=0 ppid=1873 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 13 10:01:09.188000 audit[1917]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.188000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe300937a0 a2=0 a3=7ffe3009378c items=0 ppid=1873 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 13 10:01:09.191000 audit[1920]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.191000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff8f501b40 a2=0 a3=7fff8f501b2c items=0 ppid=1873 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.191000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 13 10:01:09.191000 audit[1921]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.191000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3b5ba860 a2=0 a3=7ffe3b5ba84c items=0 ppid=1873 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 13 10:01:09.192000 audit[1922]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.192000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe91f53550 a2=0 a3=7ffe91f5353c items=0 ppid=1873 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 13 10:01:09.194000 audit[1924]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.194000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc9b2028c0 a2=0 a3=7ffc9b2028ac items=0 ppid=1873 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 13 10:01:09.207407 kubelet[1873]: I0213 10:01:09.207394 1873 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.83" Feb 13 10:01:09.195000 audit[1926]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.195000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffde14503d0 a2=0 a3=7ffde14503bc items=0 ppid=1873 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 13 10:01:09.256000 audit[1929]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.256000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc2b3f11a0 a2=0 a3=7ffc2b3f118c items=0 ppid=1873 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.256000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 13 10:01:09.259000 audit[1931]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.259000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe9bc0eea0 a2=0 a3=7ffe9bc0ee8c items=0 ppid=1873 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 13 10:01:09.266000 audit[1934]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.266000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe341f3e30 a2=0 a3=7ffe341f3e1c items=0 ppid=1873 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 13 10:01:09.268286 kubelet[1873]: I0213 10:01:09.268275 1873 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 13 10:01:09.267000 audit[1935]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.267000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9b450f30 a2=0 a3=7ffe9b450f1c items=0 ppid=1873 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 13 10:01:09.267000 audit[1936]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.267000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe869b00b0 a2=0 a3=7ffe869b009c items=0 ppid=1873 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 13 10:01:09.268000 audit[1937]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.268000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe709ae700 a2=0 a3=7ffe709ae6ec items=0 ppid=1873 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.268000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 13 10:01:09.268000 audit[1938]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.268000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3b4e8a70 a2=0 a3=7ffd3b4e8a5c items=0 ppid=1873 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 13 10:01:09.269000 audit[1940]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:09.269000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb02ed7e0 a2=0 a3=7ffdb02ed7cc items=0 ppid=1873 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 13 10:01:09.269000 audit[1941]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.269000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff50e38110 a2=0 a3=7fff50e380fc items=0 ppid=1873 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 
10:01:09.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 13 10:01:09.270000 audit[1942]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.270000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc8d385aa0 a2=0 a3=7ffc8d385a8c items=0 ppid=1873 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 13 10:01:09.271000 audit[1944]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.271000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffec0459880 a2=0 a3=7ffec045986c items=0 ppid=1873 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 13 10:01:09.272000 audit[1945]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.272000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc26a15d0 a2=0 
a3=7ffcc26a15bc items=0 ppid=1873 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 13 10:01:09.273000 audit[1946]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.273000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8084370 a2=0 a3=7ffeb808435c items=0 ppid=1873 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 13 10:01:09.274000 audit[1948]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.274000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc7ad6f0d0 a2=0 a3=7ffc7ad6f0bc items=0 ppid=1873 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 13 10:01:09.276000 audit[1950]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.276000 audit[1950]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc5b86f6f0 a2=0 a3=7ffc5b86f6dc items=0 ppid=1873 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 13 10:01:09.277000 audit[1952]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.277000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffcf879e030 a2=0 a3=7ffcf879e01c items=0 ppid=1873 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 13 10:01:09.279000 audit[1954]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.279000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc590de0a0 a2=0 a3=7ffc590de08c items=0 ppid=1873 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.279000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 13 10:01:09.281000 audit[1956]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.281000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fffc29d7a30 a2=0 a3=7fffc29d7a1c items=0 ppid=1873 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 13 10:01:09.283124 kubelet[1873]: I0213 10:01:09.283089 1873 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 13 10:01:09.283124 kubelet[1873]: I0213 10:01:09.283100 1873 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 10:01:09.283124 kubelet[1873]: I0213 10:01:09.283114 1873 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 10:01:09.283203 kubelet[1873]: E0213 10:01:09.283142 1873 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 10:01:09.282000 audit[1957]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.282000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1de4a8a0 a2=0 a3=7fff1de4a88c items=0 ppid=1873 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 13 10:01:09.282000 audit[1958]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.282000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2de5e960 a2=0 a3=7fff2de5e94c items=0 ppid=1873 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 13 10:01:09.283000 audit[1959]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:09.283000 
audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffff8084b0 a2=0 a3=7fffff80849c items=0 ppid=1873 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:09.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 13 10:01:09.308109 kubelet[1873]: I0213 10:01:09.307946 1873 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.83" Feb 13 10:01:09.322346 kubelet[1873]: I0213 10:01:09.322310 1873 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 10:01:09.323069 env[1473]: time="2024-02-13T10:01:09.322984341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 10:01:09.323790 kubelet[1873]: I0213 10:01:09.323394 1873 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 10:01:10.101312 kubelet[1873]: E0213 10:01:10.101241 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:10.102181 kubelet[1873]: I0213 10:01:10.101437 1873 apiserver.go:52] "Watching apiserver" Feb 13 10:01:10.304405 kubelet[1873]: I0213 10:01:10.304339 1873 topology_manager.go:210] "Topology Admit Handler" Feb 13 10:01:10.304680 kubelet[1873]: I0213 10:01:10.304555 1873 topology_manager.go:210] "Topology Admit Handler" Feb 13 10:01:10.304680 kubelet[1873]: I0213 10:01:10.304651 1873 topology_manager.go:210] "Topology Admit Handler" Feb 13 10:01:10.305101 kubelet[1873]: E0213 10:01:10.305059 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:10.308121 kubelet[1873]: I0213 10:01:10.308082 1873 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 10:01:10.313278 kubelet[1873]: I0213 10:01:10.313180 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/15d6d9af-5bd0-4d52-a244-b2ec483822b5-socket-dir\") pod \"csi-node-driver-284zz\" (UID: \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\") " pod="calico-system/csi-node-driver-284zz" Feb 13 10:01:10.313278 kubelet[1873]: I0213 10:01:10.313285 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-lib-modules\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.313657 kubelet[1873]: I0213 10:01:10.313451 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b46b8e-e175-4021-b60f-7bf37dcdfa67-tigera-ca-bundle\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.313657 kubelet[1873]: I0213 10:01:10.313625 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-cni-log-dir\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.313850 kubelet[1873]: I0213 10:01:10.313794 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1fde27e-de6d-429a-9253-36b302f2ceeb-xtables-lock\") pod \"kube-proxy-nx7k5\" (UID: \"c1fde27e-de6d-429a-9253-36b302f2ceeb\") " pod="kube-system/kube-proxy-nx7k5" Feb 13 10:01:10.313983 kubelet[1873]: I0213 10:01:10.313895 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s2p\" (UniqueName: \"kubernetes.io/projected/c1fde27e-de6d-429a-9253-36b302f2ceeb-kube-api-access-g6s2p\") pod \"kube-proxy-nx7k5\" (UID: \"c1fde27e-de6d-429a-9253-36b302f2ceeb\") " pod="kube-system/kube-proxy-nx7k5" Feb 13 10:01:10.313983 kubelet[1873]: I0213 10:01:10.313959 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-var-lib-calico\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.314175 kubelet[1873]: I0213 10:01:10.314112 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdnks\" (UniqueName: \"kubernetes.io/projected/44b46b8e-e175-4021-b60f-7bf37dcdfa67-kube-api-access-jdnks\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.314295 kubelet[1873]: I0213 10:01:10.314256 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/15d6d9af-5bd0-4d52-a244-b2ec483822b5-registration-dir\") pod \"csi-node-driver-284zz\" (UID: \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\") " pod="calico-system/csi-node-driver-284zz" Feb 13 10:01:10.314430 kubelet[1873]: I0213 10:01:10.314404 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-var-run-calico\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.314652 kubelet[1873]: I0213 10:01:10.314582 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/15d6d9af-5bd0-4d52-a244-b2ec483822b5-kubelet-dir\") pod \"csi-node-driver-284zz\" (UID: \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\") " pod="calico-system/csi-node-driver-284zz" Feb 13 10:01:10.314837 kubelet[1873]: I0213 10:01:10.314701 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-xtables-lock\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.314837 kubelet[1873]: I0213 10:01:10.314774 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-policysync\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.314837 kubelet[1873]: I0213 10:01:10.314840 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/44b46b8e-e175-4021-b60f-7bf37dcdfa67-node-certs\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.315150 kubelet[1873]: I0213 10:01:10.314952 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-cni-bin-dir\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.315150 kubelet[1873]: I0213 10:01:10.315084 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1fde27e-de6d-429a-9253-36b302f2ceeb-lib-modules\") pod \"kube-proxy-nx7k5\" (UID: \"c1fde27e-de6d-429a-9253-36b302f2ceeb\") " pod="kube-system/kube-proxy-nx7k5" Feb 13 10:01:10.315342 kubelet[1873]: I0213 10:01:10.315260 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/15d6d9af-5bd0-4d52-a244-b2ec483822b5-varrun\") pod \"csi-node-driver-284zz\" (UID: \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\") " pod="calico-system/csi-node-driver-284zz" Feb 13 10:01:10.315471 kubelet[1873]: I0213 10:01:10.315391 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-cni-net-dir\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.315471 kubelet[1873]: I0213 10:01:10.315466 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/44b46b8e-e175-4021-b60f-7bf37dcdfa67-flexvol-driver-host\") pod \"calico-node-rp6kh\" (UID: \"44b46b8e-e175-4021-b60f-7bf37dcdfa67\") " pod="calico-system/calico-node-rp6kh" Feb 13 10:01:10.315688 kubelet[1873]: I0213 10:01:10.315619 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1fde27e-de6d-429a-9253-36b302f2ceeb-kube-proxy\") pod 
\"kube-proxy-nx7k5\" (UID: \"c1fde27e-de6d-429a-9253-36b302f2ceeb\") " pod="kube-system/kube-proxy-nx7k5" Feb 13 10:01:10.315791 kubelet[1873]: I0213 10:01:10.315767 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfv8\" (UniqueName: \"kubernetes.io/projected/15d6d9af-5bd0-4d52-a244-b2ec483822b5-kube-api-access-7rfv8\") pod \"csi-node-driver-284zz\" (UID: \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\") " pod="calico-system/csi-node-driver-284zz" Feb 13 10:01:10.315893 kubelet[1873]: I0213 10:01:10.315876 1873 reconciler.go:41] "Reconciler: start to sync state" Feb 13 10:01:10.318643 systemd[1]: Created slice kubepods-besteffort-pod44b46b8e_e175_4021_b60f_7bf37dcdfa67.slice. Feb 13 10:01:10.343091 systemd[1]: Created slice kubepods-besteffort-podc1fde27e_de6d_429a_9253_36b302f2ceeb.slice. Feb 13 10:01:10.389000 audit[1628]: USER_END pid=1628 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 10:01:10.389000 audit[1628]: CRED_DISP pid=1628 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 13 10:01:10.390904 sudo[1628]: pam_unix(sudo:session): session closed for user root Feb 13 10:01:10.394102 sshd[1624]: pam_unix(sshd:session): session closed for user core Feb 13 10:01:10.395000 audit[1624]: USER_END pid=1624 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 10:01:10.395000 audit[1624]: CRED_DISP pid=1624 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 10:01:10.399896 systemd[1]: sshd@6-139.178.70.83:22-139.178.68.195:51892.service: Deactivated successfully. Feb 13 10:01:10.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.83:22-139.178.68.195:51892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 10:01:10.401661 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 10:01:10.403422 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Feb 13 10:01:10.405669 systemd-logind[1463]: Removed session 9. 
Feb 13 10:01:10.519346 kubelet[1873]: E0213 10:01:10.519281 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.519346 kubelet[1873]: W0213 10:01:10.519326 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.519783 kubelet[1873]: E0213 10:01:10.519431 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.520014 kubelet[1873]: E0213 10:01:10.519966 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.520014 kubelet[1873]: W0213 10:01:10.519999 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.520323 kubelet[1873]: E0213 10:01:10.520038 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.520615 kubelet[1873]: E0213 10:01:10.520572 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.520615 kubelet[1873]: W0213 10:01:10.520600 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.520959 kubelet[1873]: E0213 10:01:10.520648 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.521217 kubelet[1873]: E0213 10:01:10.521176 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.521217 kubelet[1873]: W0213 10:01:10.521209 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.521478 kubelet[1873]: E0213 10:01:10.521247 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.521862 kubelet[1873]: E0213 10:01:10.521817 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.521862 kubelet[1873]: W0213 10:01:10.521849 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.522214 kubelet[1873]: E0213 10:01:10.521888 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.522492 kubelet[1873]: E0213 10:01:10.522419 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.522492 kubelet[1873]: W0213 10:01:10.522445 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.522492 kubelet[1873]: E0213 10:01:10.522479 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.623714 kubelet[1873]: E0213 10:01:10.623650 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.623714 kubelet[1873]: W0213 10:01:10.623697 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.624153 kubelet[1873]: E0213 10:01:10.623760 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.624399 kubelet[1873]: E0213 10:01:10.624338 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.624399 kubelet[1873]: W0213 10:01:10.624370 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.624769 kubelet[1873]: E0213 10:01:10.624452 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.625034 kubelet[1873]: E0213 10:01:10.624994 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.625034 kubelet[1873]: W0213 10:01:10.625026 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.625358 kubelet[1873]: E0213 10:01:10.625076 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.625651 kubelet[1873]: E0213 10:01:10.625612 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.625651 kubelet[1873]: W0213 10:01:10.625643 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.626001 kubelet[1873]: E0213 10:01:10.625693 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.626257 kubelet[1873]: E0213 10:01:10.626218 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.626257 kubelet[1873]: W0213 10:01:10.626249 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.626622 kubelet[1873]: E0213 10:01:10.626299 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.626883 kubelet[1873]: E0213 10:01:10.626845 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.626883 kubelet[1873]: W0213 10:01:10.626875 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.627183 kubelet[1873]: E0213 10:01:10.626924 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.714774 kubelet[1873]: E0213 10:01:10.714758 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.714774 kubelet[1873]: W0213 10:01:10.714768 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.714866 kubelet[1873]: E0213 10:01:10.714782 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.727280 kubelet[1873]: E0213 10:01:10.727238 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.727280 kubelet[1873]: W0213 10:01:10.727245 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.727280 kubelet[1873]: E0213 10:01:10.727252 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.727376 kubelet[1873]: E0213 10:01:10.727366 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.727398 kubelet[1873]: W0213 10:01:10.727376 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.727398 kubelet[1873]: E0213 10:01:10.727383 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.727564 kubelet[1873]: E0213 10:01:10.727516 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.727564 kubelet[1873]: W0213 10:01:10.727522 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.727564 kubelet[1873]: E0213 10:01:10.727529 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.727713 kubelet[1873]: E0213 10:01:10.727679 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.727713 kubelet[1873]: W0213 10:01:10.727685 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.727713 kubelet[1873]: E0213 10:01:10.727692 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.727878 kubelet[1873]: E0213 10:01:10.727838 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.727878 kubelet[1873]: W0213 10:01:10.727844 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.727878 kubelet[1873]: E0213 10:01:10.727851 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.829327 kubelet[1873]: E0213 10:01:10.829212 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.829327 kubelet[1873]: W0213 10:01:10.829253 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.829327 kubelet[1873]: E0213 10:01:10.829299 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.829963 kubelet[1873]: E0213 10:01:10.829875 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.829963 kubelet[1873]: W0213 10:01:10.829908 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.829963 kubelet[1873]: E0213 10:01:10.829946 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.830571 kubelet[1873]: E0213 10:01:10.830496 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.830571 kubelet[1873]: W0213 10:01:10.830522 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.830571 kubelet[1873]: E0213 10:01:10.830557 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.831167 kubelet[1873]: E0213 10:01:10.831078 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.831167 kubelet[1873]: W0213 10:01:10.831111 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.831167 kubelet[1873]: E0213 10:01:10.831149 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.831781 kubelet[1873]: E0213 10:01:10.831693 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.831781 kubelet[1873]: W0213 10:01:10.831726 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.831781 kubelet[1873]: E0213 10:01:10.831769 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.933638 kubelet[1873]: E0213 10:01:10.933528 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.933638 kubelet[1873]: W0213 10:01:10.933570 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.933638 kubelet[1873]: E0213 10:01:10.933617 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.934249 kubelet[1873]: E0213 10:01:10.934158 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.934249 kubelet[1873]: W0213 10:01:10.934191 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.934249 kubelet[1873]: E0213 10:01:10.934231 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.934891 kubelet[1873]: E0213 10:01:10.934802 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.934891 kubelet[1873]: W0213 10:01:10.934835 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.934891 kubelet[1873]: E0213 10:01:10.934874 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:10.935430 kubelet[1873]: E0213 10:01:10.935391 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.935430 kubelet[1873]: W0213 10:01:10.935417 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.935656 kubelet[1873]: E0213 10:01:10.935454 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:10.936042 kubelet[1873]: E0213 10:01:10.935952 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:10.936042 kubelet[1873]: W0213 10:01:10.935984 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:10.936042 kubelet[1873]: E0213 10:01:10.936022 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.037932 kubelet[1873]: E0213 10:01:11.037742 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.037932 kubelet[1873]: W0213 10:01:11.037789 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.037932 kubelet[1873]: E0213 10:01:11.037849 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.038596 kubelet[1873]: E0213 10:01:11.038489 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.038596 kubelet[1873]: W0213 10:01:11.038526 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.038596 kubelet[1873]: E0213 10:01:11.038580 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.039148 kubelet[1873]: E0213 10:01:11.039107 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.039148 kubelet[1873]: W0213 10:01:11.039138 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.039536 kubelet[1873]: E0213 10:01:11.039192 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.039791 kubelet[1873]: E0213 10:01:11.039752 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.039791 kubelet[1873]: W0213 10:01:11.039782 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.040129 kubelet[1873]: E0213 10:01:11.039832 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.040427 kubelet[1873]: E0213 10:01:11.040363 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.040427 kubelet[1873]: W0213 10:01:11.040422 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.040810 kubelet[1873]: E0213 10:01:11.040478 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.101635 kubelet[1873]: E0213 10:01:11.101532 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:11.118987 kubelet[1873]: E0213 10:01:11.118940 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.118987 kubelet[1873]: W0213 10:01:11.118952 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.118987 kubelet[1873]: E0213 10:01:11.118964 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.141455 kubelet[1873]: E0213 10:01:11.141407 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.141455 kubelet[1873]: W0213 10:01:11.141419 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.141455 kubelet[1873]: E0213 10:01:11.141434 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.141626 kubelet[1873]: E0213 10:01:11.141612 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.141626 kubelet[1873]: W0213 10:01:11.141621 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.141678 kubelet[1873]: E0213 10:01:11.141631 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.141848 kubelet[1873]: E0213 10:01:11.141810 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.141848 kubelet[1873]: W0213 10:01:11.141819 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.141848 kubelet[1873]: E0213 10:01:11.141829 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.142063 kubelet[1873]: E0213 10:01:11.142020 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.142063 kubelet[1873]: W0213 10:01:11.142030 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.142063 kubelet[1873]: E0213 10:01:11.142040 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.243504 kubelet[1873]: E0213 10:01:11.243406 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.243504 kubelet[1873]: W0213 10:01:11.243446 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.243504 kubelet[1873]: E0213 10:01:11.243490 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.244051 kubelet[1873]: E0213 10:01:11.244005 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.244051 kubelet[1873]: W0213 10:01:11.244040 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.244264 kubelet[1873]: E0213 10:01:11.244078 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.244725 kubelet[1873]: E0213 10:01:11.244647 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.244725 kubelet[1873]: W0213 10:01:11.244680 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.244725 kubelet[1873]: E0213 10:01:11.244719 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.245323 kubelet[1873]: E0213 10:01:11.245263 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.245323 kubelet[1873]: W0213 10:01:11.245296 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.245556 kubelet[1873]: E0213 10:01:11.245335 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.310359 kubelet[1873]: E0213 10:01:11.310157 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.310359 kubelet[1873]: W0213 10:01:11.310195 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.310359 kubelet[1873]: E0213 10:01:11.310239 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.346921 kubelet[1873]: E0213 10:01:11.346827 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.346921 kubelet[1873]: W0213 10:01:11.346864 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.346921 kubelet[1873]: E0213 10:01:11.346911 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.347477 kubelet[1873]: E0213 10:01:11.347408 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.347477 kubelet[1873]: W0213 10:01:11.347435 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.347477 kubelet[1873]: E0213 10:01:11.347474 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.348060 kubelet[1873]: E0213 10:01:11.347982 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.348060 kubelet[1873]: W0213 10:01:11.348015 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.348060 kubelet[1873]: E0213 10:01:11.348052 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.449734 kubelet[1873]: E0213 10:01:11.449626 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.449734 kubelet[1873]: W0213 10:01:11.449667 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.449734 kubelet[1873]: E0213 10:01:11.449713 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.450316 kubelet[1873]: E0213 10:01:11.450280 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.450316 kubelet[1873]: W0213 10:01:11.450313 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.450569 kubelet[1873]: E0213 10:01:11.450357 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.450947 kubelet[1873]: E0213 10:01:11.450915 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.451052 kubelet[1873]: W0213 10:01:11.450948 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.451052 kubelet[1873]: E0213 10:01:11.450987 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.501847 kubelet[1873]: I0213 10:01:11.501744 1873 request.go:690] Waited for 1.196145237s due to client-side throttling, not priority and fairness, request: GET:https://139.178.70.43:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 13 10:01:11.552624 kubelet[1873]: E0213 10:01:11.552524 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.552624 kubelet[1873]: W0213 10:01:11.552566 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.552624 kubelet[1873]: E0213 10:01:11.552613 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.553287 kubelet[1873]: E0213 10:01:11.553207 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.553287 kubelet[1873]: W0213 10:01:11.553241 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.553641 kubelet[1873]: E0213 10:01:11.553279 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.553979 kubelet[1873]: E0213 10:01:11.553899 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.553979 kubelet[1873]: W0213 10:01:11.553933 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.553979 kubelet[1873]: E0213 10:01:11.553971 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.655929 kubelet[1873]: E0213 10:01:11.655726 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.655929 kubelet[1873]: W0213 10:01:11.655772 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.655929 kubelet[1873]: E0213 10:01:11.655818 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.656411 kubelet[1873]: E0213 10:01:11.656343 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.656411 kubelet[1873]: W0213 10:01:11.656368 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.656609 kubelet[1873]: E0213 10:01:11.656425 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.657023 kubelet[1873]: E0213 10:01:11.656947 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.657023 kubelet[1873]: W0213 10:01:11.656980 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.657023 kubelet[1873]: E0213 10:01:11.657021 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.758222 kubelet[1873]: E0213 10:01:11.758125 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.758222 kubelet[1873]: W0213 10:01:11.758167 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.758222 kubelet[1873]: E0213 10:01:11.758214 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.758877 kubelet[1873]: E0213 10:01:11.758797 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.758877 kubelet[1873]: W0213 10:01:11.758831 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.758877 kubelet[1873]: E0213 10:01:11.758871 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.759535 kubelet[1873]: E0213 10:01:11.759453 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.759535 kubelet[1873]: W0213 10:01:11.759487 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.759535 kubelet[1873]: E0213 10:01:11.759527 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.860829 kubelet[1873]: E0213 10:01:11.860734 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.860829 kubelet[1873]: W0213 10:01:11.860778 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.860829 kubelet[1873]: E0213 10:01:11.860825 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.861439 kubelet[1873]: E0213 10:01:11.861402 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.861439 kubelet[1873]: W0213 10:01:11.861439 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.861658 kubelet[1873]: E0213 10:01:11.861479 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.862127 kubelet[1873]: E0213 10:01:11.862047 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.862127 kubelet[1873]: W0213 10:01:11.862081 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.862127 kubelet[1873]: E0213 10:01:11.862121 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.922419 kubelet[1873]: E0213 10:01:11.922384 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.922419 kubelet[1873]: W0213 10:01:11.922413 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.922419 kubelet[1873]: E0213 10:01:11.922424 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:11.963718 kubelet[1873]: E0213 10:01:11.963619 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.963718 kubelet[1873]: W0213 10:01:11.963663 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.963718 kubelet[1873]: E0213 10:01:11.963708 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:11.964351 kubelet[1873]: E0213 10:01:11.964311 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:11.964351 kubelet[1873]: W0213 10:01:11.964348 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:11.964605 kubelet[1873]: E0213 10:01:11.964405 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:12.065498 kubelet[1873]: E0213 10:01:12.065434 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.065498 kubelet[1873]: W0213 10:01:12.065477 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.065886 kubelet[1873]: E0213 10:01:12.065522 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:12.066191 kubelet[1873]: E0213 10:01:12.066109 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.066191 kubelet[1873]: W0213 10:01:12.066149 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.066191 kubelet[1873]: E0213 10:01:12.066189 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:12.101851 kubelet[1873]: E0213 10:01:12.101744 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:12.119937 kubelet[1873]: E0213 10:01:12.119876 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.119937 kubelet[1873]: W0213 10:01:12.119916 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.120460 kubelet[1873]: E0213 10:01:12.119961 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:12.137255 env[1473]: time="2024-02-13T10:01:12.137196607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rp6kh,Uid:44b46b8e-e175-4021-b60f-7bf37dcdfa67,Namespace:calico-system,Attempt:0,}" Feb 13 10:01:12.147925 env[1473]: time="2024-02-13T10:01:12.147875510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx7k5,Uid:c1fde27e-de6d-429a-9253-36b302f2ceeb,Namespace:kube-system,Attempt:0,}" Feb 13 10:01:12.167996 kubelet[1873]: E0213 10:01:12.167902 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.167996 kubelet[1873]: W0213 10:01:12.167943 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.167996 kubelet[1873]: E0213 10:01:12.167989 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:12.269797 kubelet[1873]: E0213 10:01:12.269625 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.269797 kubelet[1873]: W0213 10:01:12.269658 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.269797 kubelet[1873]: E0213 10:01:12.269693 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 10:01:12.283948 kubelet[1873]: E0213 10:01:12.283855 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:12.318436 kubelet[1873]: E0213 10:01:12.318421 1873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 10:01:12.318436 kubelet[1873]: W0213 10:01:12.318430 1873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 10:01:12.318539 kubelet[1873]: E0213 10:01:12.318440 1873 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 10:01:12.776947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133328242.mount: Deactivated successfully. 
Feb 13 10:01:12.778667 env[1473]: time="2024-02-13T10:01:12.778650013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.779519 env[1473]: time="2024-02-13T10:01:12.779476821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.780072 env[1473]: time="2024-02-13T10:01:12.780059944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.780732 env[1473]: time="2024-02-13T10:01:12.780718994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.781084 env[1473]: time="2024-02-13T10:01:12.781073151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.782236 env[1473]: time="2024-02-13T10:01:12.782223635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.782664 env[1473]: time="2024-02-13T10:01:12.782652118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.783810 env[1473]: time="2024-02-13T10:01:12.783799229Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:12.790841 env[1473]: time="2024-02-13T10:01:12.790785714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 10:01:12.790841 env[1473]: time="2024-02-13T10:01:12.790805617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 10:01:12.790841 env[1473]: time="2024-02-13T10:01:12.790812455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 10:01:12.790943 env[1473]: time="2024-02-13T10:01:12.790871708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374 pid=2053 runtime=io.containerd.runc.v2 Feb 13 10:01:12.791851 env[1473]: time="2024-02-13T10:01:12.791792800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 10:01:12.791851 env[1473]: time="2024-02-13T10:01:12.791825295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 10:01:12.791851 env[1473]: time="2024-02-13T10:01:12.791846962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 10:01:12.791917 env[1473]: time="2024-02-13T10:01:12.791901455Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c25897eb2f3f25b4b94ac9edd9842c306dceac1cba31e5531bb25f29bf03b46e pid=2061 runtime=io.containerd.runc.v2 Feb 13 10:01:12.797840 systemd[1]: Started cri-containerd-3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374.scope. Feb 13 10:01:12.798652 systemd[1]: Started cri-containerd-c25897eb2f3f25b4b94ac9edd9842c306dceac1cba31e5531bb25f29bf03b46e.scope. Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.809658 kernel: kauditd_printk_skb: 477 callbacks suppressed Feb 13 10:01:12.809687 kernel: audit: type=1400 audit(1707818472.802:580): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.868257 kernel: audit: type=1400 audit(1707818472.802:581): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.868282 kernel: audit: type=1400 audit(1707818472.802:582): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.922512 kernel: audit: type=1400 audit(1707818472.802:583): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.978001 kernel: audit: type=1400 audit(1707818472.802:584): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.034496 kernel: audit: type=1400 audit(1707818472.802:585): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.092511 kernel: audit: type=1400 audit(1707818472.802:586): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.102859 kubelet[1873]: E0213 10:01:13.102819 1873 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:13.152258 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Feb 13 10:01:13.152284 kernel: audit: type=1400 audit(1707818472.802:587): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.152296 kernel: audit: type=1400 audit(1707818472.802:588): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.802000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit: BPF prog-id=67 op=LOAD Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2053 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.829000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363633616362303661343231346533396266323564666434353463 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=c items=0 ppid=2053 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363633616362303661343231346533396266323564666434353463 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 
audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.830000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.829000 audit: BPF prog-id=68 op=LOAD Feb 13 10:01:12.829000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c0001d6ab0 items=0 ppid=2053 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363633616362303661343231346533396266323564666434353463 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit: BPF prog-id=69 op=LOAD Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001c7c48 a2=10 a3=1c items=0 ppid=2061 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353839376562326633663235623462393461633965646439383432 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001c76b0 a2=3c a3=c items=0 ppid=2061 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353839376562326633663235623462393461633965646439383432 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 audit: BPF prog-id=70 op=LOAD Feb 13 10:01:12.921000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:12.921000 
audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0001d6af8 items=0 ppid=2053 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363633616362303661343231346533396266323564666434353463 Feb 13 10:01:13.150000 audit: BPF prog-id=70 op=UNLOAD Feb 13 10:01:13.150000 audit: BPF prog-id=68 op=UNLOAD Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { perfmon } for pid=2073 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit[2073]: AVC avc: denied { bpf } for pid=2073 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.150000 audit: BPF prog-id=72 op=LOAD Feb 13 10:01:13.150000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c0001d6f08 items=0 ppid=2053 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:13.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361363633616362303661343231346533396266323564666434353463 Feb 13 10:01:12.921000 audit[2078]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001c79d8 a2=78 a3=c0002079d0 items=0 ppid=2061 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:12.921000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353839376562326633663235623462393461633965646439383432 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC 
avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit: BPF prog-id=73 op=LOAD Feb 13 10:01:13.300000 audit[2078]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001c7770 a2=78 a3=c000207a18 items=0 ppid=2061 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:13.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353839376562326633663235623462393461633965646439383432 Feb 13 10:01:13.300000 audit: BPF prog-id=73 op=UNLOAD Feb 13 10:01:13.300000 audit: BPF prog-id=71 op=UNLOAD Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { perfmon } for pid=2078 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit[2078]: AVC avc: denied { bpf } for pid=2078 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:13.300000 audit: BPF prog-id=74 op=LOAD Feb 13 10:01:13.300000 audit[2078]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001c7c30 a2=78 a3=c000207e28 items=0 ppid=2061 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:13.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353839376562326633663235623462393461633965646439383432 Feb 13 10:01:13.306807 env[1473]: time="2024-02-13T10:01:13.306780999Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-rp6kh,Uid:44b46b8e-e175-4021-b60f-7bf37dcdfa67,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374\"" Feb 13 10:01:13.307069 env[1473]: time="2024-02-13T10:01:13.306810428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx7k5,Uid:c1fde27e-de6d-429a-9253-36b302f2ceeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c25897eb2f3f25b4b94ac9edd9842c306dceac1cba31e5531bb25f29bf03b46e\"" Feb 13 10:01:13.307702 env[1473]: time="2024-02-13T10:01:13.307690150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 13 10:01:14.103649 kubelet[1873]: E0213 10:01:14.103554 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:14.283682 kubelet[1873]: E0213 10:01:14.283604 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:14.412095 systemd-timesyncd[1420]: Timed out waiting for reply from 65.73.197.211:123 (0.flatcar.pool.ntp.org). Feb 13 10:01:14.473250 systemd-timesyncd[1420]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Feb 13 10:01:14.473403 systemd-timesyncd[1420]: Initial clock synchronization to Tue 2024-02-13 10:01:14.582619 UTC. 
Feb 13 10:01:15.104623 kubelet[1873]: E0213 10:01:15.104520 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:16.105016 kubelet[1873]: E0213 10:01:16.104935 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:16.284300 kubelet[1873]: E0213 10:01:16.284224 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:17.105566 kubelet[1873]: E0213 10:01:17.105467 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:17.299763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274607738.mount: Deactivated successfully. 
Feb 13 10:01:18.106563 kubelet[1873]: E0213 10:01:18.106451 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:18.284756 kubelet[1873]: E0213 10:01:18.284647 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:19.057588 kubelet[1873]: I0213 10:01:19.057478 1873 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 10:01:19.107447 kubelet[1873]: E0213 10:01:19.107326 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:20.108193 kubelet[1873]: E0213 10:01:20.108115 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:20.284632 kubelet[1873]: E0213 10:01:20.284528 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:21.108579 kubelet[1873]: E0213 10:01:21.108502 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:22.108979 kubelet[1873]: E0213 10:01:22.108897 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:22.284457 kubelet[1873]: E0213 10:01:22.284356 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:23.109584 kubelet[1873]: E0213 10:01:23.109471 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:24.110311 kubelet[1873]: E0213 10:01:24.110211 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:24.284265 kubelet[1873]: E0213 10:01:24.284156 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:25.110689 kubelet[1873]: E0213 10:01:25.110570 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:26.111552 kubelet[1873]: E0213 10:01:26.111438 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:26.284613 kubelet[1873]: E0213 10:01:26.284504 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:27.112568 kubelet[1873]: E0213 10:01:27.112454 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:28.113728 kubelet[1873]: E0213 10:01:28.113622 1873 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:28.284248 kubelet[1873]: E0213 10:01:28.284134 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:29.101614 kubelet[1873]: E0213 10:01:29.101509 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:29.114017 kubelet[1873]: E0213 10:01:29.113960 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:30.114163 kubelet[1873]: E0213 10:01:30.114122 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:30.283762 kubelet[1873]: E0213 10:01:30.283650 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:31.114454 kubelet[1873]: E0213 10:01:31.114348 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:32.115092 kubelet[1873]: E0213 10:01:32.114939 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:32.283872 kubelet[1873]: E0213 10:01:32.283798 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:33.115352 kubelet[1873]: E0213 10:01:33.115238 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:34.116095 kubelet[1873]: E0213 10:01:34.115988 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:34.284274 kubelet[1873]: E0213 10:01:34.284174 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:35.116323 kubelet[1873]: E0213 10:01:35.116252 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:36.117486 kubelet[1873]: E0213 10:01:36.117324 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:36.284122 kubelet[1873]: E0213 10:01:36.284001 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:37.118057 kubelet[1873]: E0213 10:01:37.117944 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:38.119297 kubelet[1873]: E0213 10:01:38.119189 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:38.284043 kubelet[1873]: 
E0213 10:01:38.283937 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:39.119681 kubelet[1873]: E0213 10:01:39.119585 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:39.740824 env[1473]: time="2024-02-13T10:01:39.740796723Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:39.741503 env[1473]: time="2024-02-13T10:01:39.741490556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:39.742732 env[1473]: time="2024-02-13T10:01:39.742678150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:39.743962 env[1473]: time="2024-02-13T10:01:39.743916364Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:39.744944 env[1473]: time="2024-02-13T10:01:39.744902720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 13 10:01:39.745238 env[1473]: time="2024-02-13T10:01:39.745219923Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 13 10:01:39.746126 env[1473]: time="2024-02-13T10:01:39.746111799Z" level=info msg="CreateContainer within sandbox \"3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 10:01:39.751971 env[1473]: time="2024-02-13T10:01:39.751957035Z" level=info msg="CreateContainer within sandbox \"3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98\"" Feb 13 10:01:39.752285 env[1473]: time="2024-02-13T10:01:39.752275277Z" level=info msg="StartContainer for \"6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98\"" Feb 13 10:01:39.762485 systemd[1]: Started cri-containerd-6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98.scope. Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.795726 kernel: kauditd_printk_skb: 106 callbacks suppressed Feb 13 10:01:39.795766 kernel: audit: type=1400 audit(1707818499.767:616): avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7fa6ec8bf4d8 items=0 ppid=2053 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:39.956105 kernel: audit: type=1300 audit(1707818499.767:616): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7fa6ec8bf4d8 items=0 
ppid=2053 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:39.956137 kernel: audit: type=1327 audit(1707818499.767:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635343630303265343564373765616237383361303962623036343438 Feb 13 10:01:39.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635343630303265343564373765616237383361303962623036343438 Feb 13 10:01:40.048411 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.111447 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.119689 kubelet[1873]: E0213 10:01:40.119649 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:40.174395 kernel: audit: type=1400 
audit(1707818499.767:617): avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.237296 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.283313 kubelet[1873]: E0213 10:01:40.283273 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:40.300708 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.364053 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: 
denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.368122 env[1473]: time="2024-02-13T10:01:40.368097984Z" level=info msg="StartContainer for \"6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98\" returns successfully" Feb 13 10:01:40.427440 kernel: audit: type=1400 audit(1707818499.767:617): avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.427749 systemd[1]: cri-containerd-6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98.scope: Deactivated successfully. Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.767000 audit: BPF prog-id=75 op=LOAD Feb 13 10:01:39.767000 audit[2132]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00023ac58 items=0 ppid=2053 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 13 10:01:39.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635343630303265343564373765616237383361303962623036343438 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 13 10:01:39.858000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:39.858000 audit: BPF prog-id=76 op=LOAD Feb 13 10:01:39.858000 audit[2132]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00023aca8 items=0 ppid=2053 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:39.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635343630303265343564373765616237383361303962623036343438 Feb 13 10:01:40.047000 audit: BPF prog-id=76 op=UNLOAD Feb 13 10:01:40.047000 audit: BPF prog-id=75 op=UNLOAD Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { perfmon } for pid=2132 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit[2132]: AVC avc: denied { bpf } for pid=2132 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:40.047000 audit: BPF prog-id=77 op=LOAD Feb 13 10:01:40.047000 audit[2132]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00023ad38 items=0 ppid=2053 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:40.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635343630303265343564373765616237383361303962623036343438 Feb 13 10:01:40.494000 audit: BPF prog-id=77 op=UNLOAD Feb 13 10:01:40.560051 env[1473]: 
time="2024-02-13T10:01:40.559965219Z" level=info msg="shim disconnected" id=6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98 Feb 13 10:01:40.560051 env[1473]: time="2024-02-13T10:01:40.559992998Z" level=warning msg="cleaning up after shim disconnected" id=6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98 namespace=k8s.io Feb 13 10:01:40.560051 env[1473]: time="2024-02-13T10:01:40.559998785Z" level=info msg="cleaning up dead shim" Feb 13 10:01:40.563795 env[1473]: time="2024-02-13T10:01:40.563741650Z" level=warning msg="cleanup warnings time=\"2024-02-13T10:01:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2172 runtime=io.containerd.runc.v2\n" Feb 13 10:01:40.750777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6546002e45d77eab783a09bb06448de763569b22eb4d0c8ba50f846123456f98-rootfs.mount: Deactivated successfully. Feb 13 10:01:40.835838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249633796.mount: Deactivated successfully. 
Feb 13 10:01:41.120222 kubelet[1873]: E0213 10:01:41.120141 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:41.127699 env[1473]: time="2024-02-13T10:01:41.127648927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:41.128481 env[1473]: time="2024-02-13T10:01:41.128426203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:41.129198 env[1473]: time="2024-02-13T10:01:41.129154311Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:41.129919 env[1473]: time="2024-02-13T10:01:41.129876243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:41.130184 env[1473]: time="2024-02-13T10:01:41.130134161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 13 10:01:41.131108 env[1473]: time="2024-02-13T10:01:41.131060263Z" level=info msg="CreateContainer within sandbox \"c25897eb2f3f25b4b94ac9edd9842c306dceac1cba31e5531bb25f29bf03b46e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 10:01:41.136710 env[1473]: time="2024-02-13T10:01:41.136661426Z" level=info msg="CreateContainer within sandbox \"c25897eb2f3f25b4b94ac9edd9842c306dceac1cba31e5531bb25f29bf03b46e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container 
id \"cdc04541955f3e5a833d4d2f7957118b978fdb062e6ed988b4030d9bcda0b17f\"" Feb 13 10:01:41.136936 env[1473]: time="2024-02-13T10:01:41.136901065Z" level=info msg="StartContainer for \"cdc04541955f3e5a833d4d2f7957118b978fdb062e6ed988b4030d9bcda0b17f\"" Feb 13 10:01:41.144959 systemd[1]: Started cri-containerd-cdc04541955f3e5a833d4d2f7957118b978fdb062e6ed988b4030d9bcda0b17f.scope. Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f42d1bf5aa8 items=0 ppid=2061 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364633034353431393535663365356138333364346432663739353731 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit: BPF prog-id=78 op=LOAD Feb 13 10:01:41.152000 audit[2192]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0000d7ba8 items=0 ppid=2061 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.152000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364633034353431393535663365356138333364346432663739353731 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC 
avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit: BPF prog-id=79 op=LOAD Feb 13 10:01:41.152000 audit[2192]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0000d7bf8 items=0 ppid=2061 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364633034353431393535663365356138333364346432663739353731 Feb 13 10:01:41.152000 audit: BPF prog-id=79 op=UNLOAD Feb 13 10:01:41.152000 audit: BPF prog-id=78 op=UNLOAD Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { perfmon } for pid=2192 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit[2192]: AVC avc: denied { bpf } for pid=2192 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:41.152000 audit: BPF prog-id=80 op=LOAD Feb 13 10:01:41.152000 audit[2192]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0000d7c88 items=0 ppid=2061 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364633034353431393535663365356138333364346432663739353731 Feb 13 10:01:41.163732 env[1473]: time="2024-02-13T10:01:41.163680610Z" level=info msg="StartContainer for 
\"cdc04541955f3e5a833d4d2f7957118b978fdb062e6ed988b4030d9bcda0b17f\" returns successfully" Feb 13 10:01:41.217000 audit[2247]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.217000 audit[2247]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7a42a540 a2=0 a3=7ffd7a42a52c items=0 ppid=2202 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 13 10:01:41.217000 audit[2248]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.217000 audit[2248]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd7e84730 a2=0 a3=7ffcd7e8471c items=0 ppid=2202 pid=2248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.217000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 13 10:01:41.219000 audit[2249]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.219000 audit[2249]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8dd68bd0 a2=0 a3=7fff8dd68bbc items=0 ppid=2202 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.219000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 13 10:01:41.220000 audit[2250]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.220000 audit[2250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8f03fdd0 a2=0 a3=7ffe8f03fdbc items=0 ppid=2202 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 13 10:01:41.221000 audit[2253]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.221000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe532caad0 a2=0 a3=7ffe532caabc items=0 ppid=2202 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 13 10:01:41.221000 audit[2255]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.221000 audit[2255]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd87324150 a2=0 a3=7ffd8732413c items=0 ppid=2202 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 13 10:01:41.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 13 10:01:41.325000 audit[2256]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.325000 audit[2256]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe3c070c10 a2=0 a3=7ffe3c070bfc items=0 ppid=2202 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 13 10:01:41.332000 audit[2258]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.332000 audit[2258]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff4e6c1d80 a2=0 a3=7fff4e6c1d6c items=0 ppid=2202 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 13 10:01:41.340000 audit[2261]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.340000 audit[2261]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdce810c90 a2=0 a3=7ffdce810c7c items=0 
ppid=2202 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 13 10:01:41.343000 audit[2262]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.343000 audit[2262]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc177500d0 a2=0 a3=7ffc177500bc items=0 ppid=2202 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 13 10:01:41.348000 audit[2264]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.348000 audit[2264]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9af76920 a2=0 a3=7fff9af7690c items=0 ppid=2202 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.348000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 13 10:01:41.352000 audit[2265]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.352000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc93ae2a50 a2=0 a3=7ffc93ae2a3c items=0 ppid=2202 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 13 10:01:41.358000 audit[2267]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.358000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd157ec020 a2=0 a3=7ffd157ec00c items=0 ppid=2202 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 13 10:01:41.363242 env[1473]: time="2024-02-13T10:01:41.363169897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 13 10:01:41.367777 kubelet[1873]: I0213 10:01:41.367733 1873 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="kube-system/kube-proxy-nx7k5" podStartSLOduration=-9.22337200448712e+09 pod.CreationTimestamp="2024-02-13 10:01:09 +0000 UTC" firstStartedPulling="2024-02-13 10:01:13.307534249 +0000 UTC m=+4.440742401" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 10:01:41.367106488 +0000 UTC m=+32.500314703" watchObservedRunningTime="2024-02-13 10:01:41.367656652 +0000 UTC m=+32.500864864" Feb 13 10:01:41.367000 audit[2270]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.367000 audit[2270]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff07b84780 a2=0 a3=7fff07b8476c items=0 ppid=2202 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 13 10:01:41.369000 audit[2271]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.369000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0a0a6a50 a2=0 a3=7fff0a0a6a3c items=0 ppid=2202 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 13 10:01:41.375000 audit[2273]: NETFILTER_CFG table=filter:50 
family=2 entries=1 op=nft_register_rule pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.375000 audit[2273]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca7d3f300 a2=0 a3=7ffca7d3f2ec items=0 ppid=2202 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 13 10:01:41.379000 audit[2274]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.379000 audit[2274]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2dac3080 a2=0 a3=7ffe2dac306c items=0 ppid=2202 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 13 10:01:41.386000 audit[2276]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.386000 audit[2276]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd877d8f30 a2=0 a3=7ffd877d8f1c items=0 ppid=2202 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.386000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 10:01:41.394000 audit[2279]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.394000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc7943190 a2=0 a3=7fffc794317c items=0 ppid=2202 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 10:01:41.404000 audit[2282]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.404000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc21e22080 a2=0 a3=7ffc21e2206c items=0 ppid=2202 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 13 10:01:41.406000 audit[2283]: NETFILTER_CFG table=nat:55 family=2 entries=1 
op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.406000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2f64c740 a2=0 a3=7ffd2f64c72c items=0 ppid=2202 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 13 10:01:41.412000 audit[2285]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.412000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff3f6e6d40 a2=0 a3=7fff3f6e6d2c items=0 ppid=2202 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 10:01:41.420000 audit[2288]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 10:01:41.420000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff53ea3220 a2=0 a3=7fff53ea320c items=0 ppid=2202 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.420000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 10:01:41.447000 audit[2297]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:01:41.447000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffef19a67f0 a2=0 a3=7ffef19a67dc items=0 ppid=2202 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:01:41.468000 audit[2297]: NETFILTER_CFG table=nat:59 family=2 entries=24 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:01:41.468000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffef19a67f0 a2=0 a3=7ffef19a67dc items=0 ppid=2202 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:01:41.472000 audit[2303]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.472000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb120a200 a2=0 a3=7ffcb120a1ec items=0 ppid=2202 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 13 10:01:41.478000 audit[2305]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.478000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff7f64a9d0 a2=0 a3=7fff7f64a9bc items=0 ppid=2202 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 13 10:01:41.496000 audit[2308]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.496000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd33e3d070 a2=0 a3=7ffd33e3d05c items=0 ppid=2202 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 13 10:01:41.498000 audit[2309]: 
NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.498000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4db1a0a0 a2=0 a3=7ffd4db1a08c items=0 ppid=2202 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.498000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 13 10:01:41.504000 audit[2311]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.504000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2b4ffe20 a2=0 a3=7ffc2b4ffe0c items=0 ppid=2202 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 13 10:01:41.507000 audit[2312]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.507000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2a6da260 a2=0 a3=7ffd2a6da24c items=0 ppid=2202 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 
10:01:41.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 13 10:01:41.513000 audit[2314]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.513000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0e1b13c0 a2=0 a3=7ffd0e1b13ac items=0 ppid=2202 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 13 10:01:41.522000 audit[2317]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.522000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcb6e591e0 a2=0 a3=7ffcb6e591cc items=0 ppid=2202 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 13 10:01:41.525000 audit[2318]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.525000 
audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc55f5ec70 a2=0 a3=7ffc55f5ec5c items=0 ppid=2202 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 13 10:01:41.531000 audit[2320]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.531000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc75424460 a2=0 a3=7ffc7542444c items=0 ppid=2202 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 13 10:01:41.534000 audit[2321]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.534000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdee571630 a2=0 a3=7ffdee57161c items=0 ppid=2202 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 13 10:01:41.540000 audit[2323]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.540000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdca916280 a2=0 a3=7ffdca91626c items=0 ppid=2202 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 10:01:41.549000 audit[2326]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.549000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea8f1cfb0 a2=0 a3=7ffea8f1cf9c items=0 ppid=2202 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 13 10:01:41.558000 audit[2329]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.558000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbae75680 a2=0 a3=7ffdbae7566c items=0 ppid=2202 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 13 10:01:41.561000 audit[2330]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.561000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffded2e9320 a2=0 a3=7ffded2e930c items=0 ppid=2202 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 13 10:01:41.566000 audit[2332]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.566000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff29a48b00 a2=0 a3=7fff29a48aec items=0 ppid=2202 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 10:01:41.575000 audit[2335]: NETFILTER_CFG table=nat:76 family=10 entries=2 
op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 10:01:41.575000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe393261c0 a2=0 a3=7ffe393261ac items=0 ppid=2202 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 10:01:41.588000 audit[2339]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 13 10:01:41.588000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc3ace1130 a2=0 a3=7ffc3ace111c items=0 ppid=2202 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.588000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:01:41.589000 audit[2339]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 13 10:01:41.589000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffc3ace1130 a2=0 a3=7ffc3ace111c items=0 ppid=2202 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:41.589000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:01:42.120880 kubelet[1873]: E0213 10:01:42.120820 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:42.283787 kubelet[1873]: E0213 10:01:42.283674 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:42.815564 update_engine[1465]: I0213 10:01:42.815459 1465 update_attempter.cc:509] Updating boot flags... Feb 13 10:01:43.121709 kubelet[1873]: E0213 10:01:43.121522 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:44.122387 kubelet[1873]: E0213 10:01:44.122266 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:44.284119 kubelet[1873]: E0213 10:01:44.284034 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:44.681017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393662128.mount: Deactivated successfully. 
Feb 13 10:01:45.123578 kubelet[1873]: E0213 10:01:45.123367 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:46.124412 kubelet[1873]: E0213 10:01:46.124293 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:46.283567 kubelet[1873]: E0213 10:01:46.283517 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:47.124618 kubelet[1873]: E0213 10:01:47.124546 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:48.125587 kubelet[1873]: E0213 10:01:48.125480 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:48.284067 kubelet[1873]: E0213 10:01:48.283962 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:49.101600 kubelet[1873]: E0213 10:01:49.101486 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:49.126824 kubelet[1873]: E0213 10:01:49.126704 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:50.126972 kubelet[1873]: E0213 10:01:50.126867 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 10:01:50.284178 kubelet[1873]: E0213 10:01:50.284078 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:51.128094 kubelet[1873]: E0213 10:01:51.127986 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:52.128998 kubelet[1873]: E0213 10:01:52.128890 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:52.283657 kubelet[1873]: E0213 10:01:52.283580 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:53.129795 kubelet[1873]: E0213 10:01:53.129684 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:54.130472 kubelet[1873]: E0213 10:01:54.130356 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:54.284003 kubelet[1873]: E0213 10:01:54.283940 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:55.131211 kubelet[1873]: E0213 10:01:55.131131 1873 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:56.132250 kubelet[1873]: E0213 10:01:56.132202 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:56.284750 kubelet[1873]: E0213 10:01:56.284642 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:57.132661 kubelet[1873]: E0213 10:01:57.132556 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:58.133588 kubelet[1873]: E0213 10:01:58.133542 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:58.284106 kubelet[1873]: E0213 10:01:58.284062 1873 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:01:58.852629 env[1473]: time="2024-02-13T10:01:58.852579913Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:58.853245 env[1473]: time="2024-02-13T10:01:58.853200043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:58.854739 env[1473]: time="2024-02-13T10:01:58.854724820Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:58.855626 env[1473]: time="2024-02-13T10:01:58.855614359Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 10:01:58.856155 env[1473]: time="2024-02-13T10:01:58.856128956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 13 10:01:58.857329 env[1473]: time="2024-02-13T10:01:58.857315018Z" level=info msg="CreateContainer within sandbox \"3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 10:01:58.862116 env[1473]: time="2024-02-13T10:01:58.862070020Z" level=info msg="CreateContainer within sandbox \"3a663acb06a4214e39bf25dfd454ce069e3f70e1934c4af6383d0be8f66f4374\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79\"" Feb 13 10:01:58.862450 env[1473]: time="2024-02-13T10:01:58.862370024Z" level=info msg="StartContainer for \"d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79\"" Feb 13 10:01:58.870850 systemd[1]: Started cri-containerd-d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79.scope. 
Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.904987 kernel: kauditd_printk_skb: 209 callbacks suppressed Feb 13 10:01:58.905053 kernel: audit: type=1400 audit(1707818518.876:673): avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=7fce003bacc8 items=0 ppid=2053 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:59.065556 kernel: audit: type=1300 audit(1707818518.876:673): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=7fce003bacc8 items=0 ppid=2053 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:59.065587 kernel: audit: type=1327 audit(1707818518.876:673): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396536333462343731653533353537623438363864353231313666 Feb 13 10:01:58.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396536333462343731653533353537623438363864353231313666 Feb 13 10:01:59.133929 kubelet[1873]: E0213 10:01:59.133873 1873 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.221444 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.221468 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.347854 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.347878 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.352907 env[1473]: 
time="2024-02-13T10:01:59.352888414Z" level=info msg="StartContainer for \"d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79\" returns successfully" Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.474969 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.475003 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.602733 kernel: audit: type=1400 audit(1707818518.876:674): avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 
audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.876000 audit: BPF prog-id=81 op=LOAD Feb 13 10:01:58.876000 audit[2363]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c000304888 items=0 ppid=2053 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:58.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396536333462343731653533353537623438363864353231313666 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: 
denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:58.967000 audit: BPF prog-id=82 op=LOAD Feb 13 10:01:58.967000 audit[2363]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0003048d8 items=0 ppid=2053 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:58.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396536333462343731653533353537623438363864353231313666 Feb 13 10:01:59.156000 audit: BPF prog-id=82 op=UNLOAD Feb 13 10:01:59.156000 audit: BPF prog-id=81 op=UNLOAD Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { perfmon } for pid=2363 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit[2363]: AVC avc: denied { bpf } for pid=2363 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 10:01:59.156000 audit: BPF prog-id=83 op=LOAD Feb 13 10:01:59.156000 audit[2363]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c000304968 items=0 ppid=2053 pid=2363 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:01:59.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436396536333462343731653533353537623438363864353231313666 Feb 13 10:02:00.013503 env[1473]: time="2024-02-13T10:02:00.013322788Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 10:02:00.018554 systemd[1]: cri-containerd-d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79.scope: Deactivated successfully. Feb 13 10:02:00.027000 audit: BPF prog-id=83 op=UNLOAD Feb 13 10:02:00.055524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79-rootfs.mount: Deactivated successfully. Feb 13 10:02:00.072805 kubelet[1873]: I0213 10:02:00.072725 1873 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 10:02:00.135086 kubelet[1873]: E0213 10:02:00.134988 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:00.295866 systemd[1]: Created slice kubepods-besteffort-pod15d6d9af_5bd0_4d52_a244_b2ec483822b5.slice. 
Feb 13 10:02:00.300235 env[1473]: time="2024-02-13T10:02:00.300126357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-284zz,Uid:15d6d9af-5bd0-4d52-a244-b2ec483822b5,Namespace:calico-system,Attempt:0,}" Feb 13 10:02:00.702227 env[1473]: time="2024-02-13T10:02:00.702088563Z" level=info msg="shim disconnected" id=d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79 Feb 13 10:02:00.702227 env[1473]: time="2024-02-13T10:02:00.702193142Z" level=warning msg="cleaning up after shim disconnected" id=d69e634b471e53557b4868d52116f627d04aca876ec0a23af9b4cc2c24607f79 namespace=k8s.io Feb 13 10:02:00.702227 env[1473]: time="2024-02-13T10:02:00.702219386Z" level=info msg="cleaning up dead shim" Feb 13 10:02:00.710806 env[1473]: time="2024-02-13T10:02:00.710749182Z" level=warning msg="cleanup warnings time=\"2024-02-13T10:02:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2426 runtime=io.containerd.runc.v2\n" Feb 13 10:02:00.733547 env[1473]: time="2024-02-13T10:02:00.733494941Z" level=error msg="Failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:00.733788 env[1473]: time="2024-02-13T10:02:00.733743506Z" level=error msg="encountered an error cleaning up failed sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:00.733788 env[1473]: time="2024-02-13T10:02:00.733779159Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-284zz,Uid:15d6d9af-5bd0-4d52-a244-b2ec483822b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:00.733998 kubelet[1873]: E0213 10:02:00.733963 1873 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:00.734043 kubelet[1873]: E0213 10:02:00.734005 1873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-284zz" Feb 13 10:02:00.734043 kubelet[1873]: E0213 10:02:00.734022 1873 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-284zz" Feb 13 10:02:00.734095 kubelet[1873]: E0213 10:02:00.734065 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-284zz_calico-system(15d6d9af-5bd0-4d52-a244-b2ec483822b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-284zz_calico-system(15d6d9af-5bd0-4d52-a244-b2ec483822b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:02:00.734659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534-shm.mount: Deactivated successfully. Feb 13 10:02:01.135819 kubelet[1873]: E0213 10:02:01.135594 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:01.423606 kubelet[1873]: I0213 10:02:01.423563 1873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:01.423923 env[1473]: time="2024-02-13T10:02:01.423855936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 13 10:02:01.424743 env[1473]: time="2024-02-13T10:02:01.424686751Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:02:01.442223 env[1473]: time="2024-02-13T10:02:01.442162433Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 10:02:01.442356 kubelet[1873]: E0213 10:02:01.442345 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:01.442399 kubelet[1873]: E0213 10:02:01.442387 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:02:01.442423 kubelet[1873]: E0213 10:02:01.442410 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:01.442469 kubelet[1873]: E0213 10:02:01.442429 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 
Feb 13 10:02:02.136101 kubelet[1873]: E0213 10:02:02.135995 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:03.136364 kubelet[1873]: E0213 10:02:03.136258 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:04.137538 kubelet[1873]: E0213 10:02:04.137468 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:04.144012 kubelet[1873]: I0213 10:02:04.143886 1873 topology_manager.go:210] "Topology Admit Handler" Feb 13 10:02:04.157536 systemd[1]: Created slice kubepods-besteffort-podda6a3b0d_4f2e_49b1_a2b6_346cad162ffb.slice. Feb 13 10:02:04.233724 kubelet[1873]: I0213 10:02:04.233624 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnn5\" (UniqueName: \"kubernetes.io/projected/da6a3b0d-4f2e-49b1-a2b6-346cad162ffb-kube-api-access-9nnn5\") pod \"nginx-deployment-8ffc5cf85-lzm4g\" (UID: \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\") " pod="default/nginx-deployment-8ffc5cf85-lzm4g" Feb 13 10:02:04.463909 env[1473]: time="2024-02-13T10:02:04.463774556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-lzm4g,Uid:da6a3b0d-4f2e-49b1-a2b6-346cad162ffb,Namespace:default,Attempt:0,}" Feb 13 10:02:04.503623 env[1473]: time="2024-02-13T10:02:04.503581805Z" level=error msg="Failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:04.503835 env[1473]: time="2024-02-13T10:02:04.503815574Z" level=error msg="encountered an error cleaning up failed sandbox 
\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:04.503869 env[1473]: time="2024-02-13T10:02:04.503851775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-lzm4g,Uid:da6a3b0d-4f2e-49b1-a2b6-346cad162ffb,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:04.504016 kubelet[1873]: E0213 10:02:04.504002 1873 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:04.504065 kubelet[1873]: E0213 10:02:04.504041 1873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-lzm4g" Feb 13 10:02:04.504065 kubelet[1873]: E0213 10:02:04.504058 1873 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-lzm4g" Feb 13 10:02:04.504122 kubelet[1873]: E0213 10:02:04.504097 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-lzm4g_default(da6a3b0d-4f2e-49b1-a2b6-346cad162ffb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-lzm4g_default(da6a3b0d-4f2e-49b1-a2b6-346cad162ffb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:04.504743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af-shm.mount: Deactivated successfully. 
Feb 13 10:02:05.138874 kubelet[1873]: E0213 10:02:05.138798 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:05.434367 kubelet[1873]: I0213 10:02:05.434307 1873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:05.435391 env[1473]: time="2024-02-13T10:02:05.435291423Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:02:05.461599 env[1473]: time="2024-02-13T10:02:05.461560505Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:05.461753 kubelet[1873]: E0213 10:02:05.461707 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:05.461753 kubelet[1873]: E0213 10:02:05.461731 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:02:05.461753 kubelet[1873]: E0213 10:02:05.461753 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:05.461938 kubelet[1873]: E0213 10:02:05.461770 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:06.140038 kubelet[1873]: E0213 10:02:06.139928 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:07.140257 kubelet[1873]: E0213 10:02:07.140154 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:08.140970 kubelet[1873]: E0213 10:02:08.140858 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:09.101598 kubelet[1873]: E0213 10:02:09.101495 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:09.141490 kubelet[1873]: E0213 10:02:09.141362 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:10.142820 kubelet[1873]: E0213 
10:02:10.142712 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:11.143700 kubelet[1873]: E0213 10:02:11.143589 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:12.144193 kubelet[1873]: E0213 10:02:12.144083 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:13.144429 kubelet[1873]: E0213 10:02:13.144318 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:14.145407 kubelet[1873]: E0213 10:02:14.145283 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:14.285055 env[1473]: time="2024-02-13T10:02:14.284911973Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:02:14.311227 env[1473]: time="2024-02-13T10:02:14.311168502Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:14.311344 kubelet[1873]: E0213 10:02:14.311333 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:14.311378 kubelet[1873]: E0213 10:02:14.311358 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:02:14.311404 kubelet[1873]: E0213 10:02:14.311387 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:14.311451 kubelet[1873]: E0213 10:02:14.311404 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:02:15.146115 kubelet[1873]: E0213 10:02:15.145993 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:16.147348 kubelet[1873]: E0213 10:02:16.147240 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:17.148393 kubelet[1873]: E0213 10:02:17.148299 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:17.284731 env[1473]: time="2024-02-13T10:02:17.284608135Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:02:17.337918 env[1473]: time="2024-02-13T10:02:17.337818606Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:17.338118 kubelet[1873]: E0213 10:02:17.338092 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:17.338213 kubelet[1873]: E0213 10:02:17.338138 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:02:17.338213 kubelet[1873]: E0213 10:02:17.338191 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Feb 13 10:02:17.338367 kubelet[1873]: E0213 10:02:17.338234 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:18.149006 kubelet[1873]: E0213 10:02:18.148895 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:19.149191 kubelet[1873]: E0213 10:02:19.149081 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:20.149971 kubelet[1873]: E0213 10:02:20.149865 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:21.151064 kubelet[1873]: E0213 10:02:21.150992 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:22.151756 kubelet[1873]: E0213 10:02:22.151676 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:23.152797 kubelet[1873]: E0213 10:02:23.152725 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:24.153794 kubelet[1873]: E0213 10:02:24.153719 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:25.154191 kubelet[1873]: E0213 10:02:25.154087 1873 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:25.284761 env[1473]: time="2024-02-13T10:02:25.284624012Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:02:25.299355 env[1473]: time="2024-02-13T10:02:25.299317590Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:25.299525 kubelet[1873]: E0213 10:02:25.299510 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:25.299577 kubelet[1873]: E0213 10:02:25.299541 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:02:25.299577 kubelet[1873]: E0213 10:02:25.299567 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:25.299658 kubelet[1873]: E0213 10:02:25.299589 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:02:26.154713 kubelet[1873]: E0213 10:02:26.154641 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:27.154986 kubelet[1873]: E0213 10:02:27.154877 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:28.155360 kubelet[1873]: E0213 10:02:28.155286 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:29.101785 kubelet[1873]: E0213 10:02:29.101718 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:29.155537 kubelet[1873]: E0213 10:02:29.155468 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:29.285830 env[1473]: time="2024-02-13T10:02:29.285727011Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:02:29.301686 env[1473]: time="2024-02-13T10:02:29.301645867Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" 
error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:29.301855 kubelet[1873]: E0213 10:02:29.301809 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:29.301855 kubelet[1873]: E0213 10:02:29.301833 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:02:29.301855 kubelet[1873]: E0213 10:02:29.301854 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:29.301976 kubelet[1873]: E0213 10:02:29.301873 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:30.156301 kubelet[1873]: E0213 10:02:30.156188 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:31.156825 kubelet[1873]: E0213 10:02:31.156713 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:32.157503 kubelet[1873]: E0213 10:02:32.157401 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:33.158148 kubelet[1873]: E0213 10:02:33.158041 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:34.159094 kubelet[1873]: E0213 10:02:34.158987 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:35.159234 kubelet[1873]: E0213 10:02:35.159125 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:36.160334 kubelet[1873]: E0213 10:02:36.160217 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:36.284587 env[1473]: time="2024-02-13T10:02:36.284459970Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:02:36.311645 env[1473]: time="2024-02-13T10:02:36.311582786Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox 
\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:36.311866 kubelet[1873]: E0213 10:02:36.311812 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:36.311912 kubelet[1873]: E0213 10:02:36.311870 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:02:36.311912 kubelet[1873]: E0213 10:02:36.311891 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:36.311912 kubelet[1873]: E0213 10:02:36.311908 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:02:37.161340 kubelet[1873]: E0213 10:02:37.161232 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:38.161636 kubelet[1873]: E0213 10:02:38.161508 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:39.162514 kubelet[1873]: E0213 10:02:39.162414 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:40.163479 kubelet[1873]: E0213 10:02:40.163358 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:40.284559 env[1473]: time="2024-02-13T10:02:40.284436164Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:02:40.313682 env[1473]: time="2024-02-13T10:02:40.313621225Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:40.313812 kubelet[1873]: E0213 10:02:40.313774 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:40.313812 kubelet[1873]: E0213 10:02:40.313799 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:02:40.313878 kubelet[1873]: E0213 10:02:40.313826 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:40.313878 kubelet[1873]: E0213 10:02:40.313843 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:41.164633 kubelet[1873]: E0213 10:02:41.164521 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:42.165299 kubelet[1873]: E0213 10:02:42.165191 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:43.166406 kubelet[1873]: E0213 10:02:43.166285 1873 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:44.166749 kubelet[1873]: E0213 10:02:44.166638 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:45.166957 kubelet[1873]: E0213 10:02:45.166850 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:46.167836 kubelet[1873]: E0213 10:02:46.167715 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:47.168003 kubelet[1873]: E0213 10:02:47.167897 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:48.168817 kubelet[1873]: E0213 10:02:48.168700 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:48.285518 env[1473]: time="2024-02-13T10:02:48.285357680Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:02:48.314091 env[1473]: time="2024-02-13T10:02:48.314002041Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:48.314214 kubelet[1873]: E0213 10:02:48.314202 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:02:48.314246 kubelet[1873]: E0213 10:02:48.314229 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:02:48.314265 kubelet[1873]: E0213 10:02:48.314251 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:48.314306 kubelet[1873]: E0213 10:02:48.314267 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:02:49.101279 kubelet[1873]: E0213 10:02:49.101161 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:49.169073 kubelet[1873]: E0213 10:02:49.168962 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
13 10:02:50.169328 kubelet[1873]: E0213 10:02:50.169213 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:51.170181 kubelet[1873]: E0213 10:02:51.170076 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:52.171294 kubelet[1873]: E0213 10:02:52.171183 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:53.171618 kubelet[1873]: E0213 10:02:53.171505 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:53.285268 env[1473]: time="2024-02-13T10:02:53.285133435Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:02:53.312052 env[1473]: time="2024-02-13T10:02:53.311986078Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:02:53.312202 kubelet[1873]: E0213 10:02:53.312166 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:02:53.312202 kubelet[1873]: E0213 10:02:53.312193 1873 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:02:53.312255 kubelet[1873]: E0213 10:02:53.312214 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:02:53.312255 kubelet[1873]: E0213 10:02:53.312232 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:02:54.172643 kubelet[1873]: E0213 10:02:54.172542 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:55.173016 kubelet[1873]: E0213 10:02:55.172907 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:56.174151 kubelet[1873]: E0213 10:02:56.174048 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:57.174419 kubelet[1873]: E0213 10:02:57.174312 1873 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:58.175464 kubelet[1873]: E0213 10:02:58.175343 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:02:59.176276 kubelet[1873]: E0213 10:02:59.176158 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:00.177320 kubelet[1873]: E0213 10:03:00.177197 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:01.178355 kubelet[1873]: E0213 10:03:01.178245 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:02.179643 kubelet[1873]: E0213 10:03:02.179570 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:02.285613 env[1473]: time="2024-02-13T10:03:02.285471738Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:03:02.311609 env[1473]: time="2024-02-13T10:03:02.311527037Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:02.311798 kubelet[1873]: E0213 10:03:02.311786 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:03:02.311830 kubelet[1873]: E0213 10:03:02.311811 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:03:02.311850 kubelet[1873]: E0213 10:03:02.311834 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:02.311905 kubelet[1873]: E0213 10:03:02.311852 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:03:03.180735 kubelet[1873]: E0213 10:03:03.180618 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:04.181522 kubelet[1873]: E0213 10:03:04.181449 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:05.181899 
kubelet[1873]: E0213 10:03:05.181789 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:05.284866 env[1473]: time="2024-02-13T10:03:05.284744681Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:03:05.310806 env[1473]: time="2024-02-13T10:03:05.310744517Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:05.310897 kubelet[1873]: E0213 10:03:05.310876 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:03:05.310931 kubelet[1873]: E0213 10:03:05.310900 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:03:05.310931 kubelet[1873]: E0213 10:03:05.310922 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:05.310994 kubelet[1873]: E0213 10:03:05.310939 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:03:06.183137 kubelet[1873]: E0213 10:03:06.183029 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:07.184189 kubelet[1873]: E0213 10:03:07.184071 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:08.185315 kubelet[1873]: E0213 10:03:08.185207 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:09.100882 kubelet[1873]: E0213 10:03:09.100775 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:09.185948 kubelet[1873]: E0213 10:03:09.185838 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:10.186772 kubelet[1873]: E0213 10:03:10.186661 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:11.187433 kubelet[1873]: E0213 10:03:11.187313 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:12.188355 kubelet[1873]: E0213 10:03:12.188276 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:13.189407 kubelet[1873]: E0213 10:03:13.189318 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:13.285300 env[1473]: time="2024-02-13T10:03:13.285199386Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:03:13.315279 env[1473]: time="2024-02-13T10:03:13.315225131Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:13.315515 kubelet[1873]: E0213 10:03:13.315458 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:03:13.315515 kubelet[1873]: E0213 10:03:13.315496 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:03:13.315515 kubelet[1873]: E0213 10:03:13.315516 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:13.315634 kubelet[1873]: E0213 10:03:13.315534 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:03:14.190344 kubelet[1873]: E0213 10:03:14.190232 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:15.191001 kubelet[1873]: E0213 10:03:15.190894 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:16.191213 kubelet[1873]: E0213 10:03:16.191084 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:17.192242 kubelet[1873]: E0213 10:03:17.192124 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:18.193346 kubelet[1873]: E0213 10:03:18.193229 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:19.194515 kubelet[1873]: E0213 
10:03:19.194405 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:20.195440 kubelet[1873]: E0213 10:03:20.195322 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:20.285527 env[1473]: time="2024-02-13T10:03:20.285397632Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:03:20.339496 env[1473]: time="2024-02-13T10:03:20.339439582Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:20.339699 kubelet[1873]: E0213 10:03:20.339651 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:03:20.339699 kubelet[1873]: E0213 10:03:20.339690 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:03:20.339853 kubelet[1873]: E0213 10:03:20.339735 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:20.339853 kubelet[1873]: E0213 10:03:20.339768 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:03:21.196806 kubelet[1873]: E0213 10:03:21.196698 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:22.197103 kubelet[1873]: E0213 10:03:22.196983 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:23.197571 kubelet[1873]: E0213 10:03:23.197465 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:24.198524 kubelet[1873]: E0213 10:03:24.198419 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:25.199464 kubelet[1873]: E0213 10:03:25.199340 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:25.285441 env[1473]: time="2024-02-13T10:03:25.285331848Z" level=info msg="StopPodSandbox for 
\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:03:25.311798 env[1473]: time="2024-02-13T10:03:25.311741626Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:25.311930 kubelet[1873]: E0213 10:03:25.311916 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:03:25.311983 kubelet[1873]: E0213 10:03:25.311949 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:03:25.311983 kubelet[1873]: E0213 10:03:25.311982 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:25.312065 kubelet[1873]: E0213 10:03:25.312009 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:03:26.200293 kubelet[1873]: E0213 10:03:26.200220 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:27.201126 kubelet[1873]: E0213 10:03:27.201049 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:28.202259 kubelet[1873]: E0213 10:03:28.202185 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:29.100644 kubelet[1873]: E0213 10:03:29.100550 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:29.202544 kubelet[1873]: E0213 10:03:29.202418 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:30.203185 kubelet[1873]: E0213 10:03:30.203065 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:31.203946 kubelet[1873]: E0213 10:03:31.203828 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:31.285702 env[1473]: time="2024-02-13T10:03:31.285574076Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:03:31.315362 env[1473]: 
time="2024-02-13T10:03:31.315276363Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:31.315508 kubelet[1873]: E0213 10:03:31.315497 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:03:31.315556 kubelet[1873]: E0213 10:03:31.315521 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:03:31.315556 kubelet[1873]: E0213 10:03:31.315555 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:31.315619 kubelet[1873]: E0213 10:03:31.315573 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:03:32.204578 kubelet[1873]: E0213 10:03:32.204458 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:33.204916 kubelet[1873]: E0213 10:03:33.204793 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:34.205722 kubelet[1873]: E0213 10:03:34.205613 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:35.206941 kubelet[1873]: E0213 10:03:35.206826 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:36.207409 kubelet[1873]: E0213 10:03:36.207270 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:37.207578 kubelet[1873]: E0213 10:03:37.207510 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:38.207806 kubelet[1873]: E0213 10:03:38.207732 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:39.208861 kubelet[1873]: E0213 10:03:39.208740 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:40.209358 kubelet[1873]: E0213 10:03:40.209238 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:40.285397 env[1473]: time="2024-02-13T10:03:40.285281340Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:03:40.311970 env[1473]: time="2024-02-13T10:03:40.311937591Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:40.312131 kubelet[1873]: E0213 10:03:40.312092 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:03:40.312131 kubelet[1873]: E0213 10:03:40.312116 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:03:40.312198 kubelet[1873]: E0213 10:03:40.312139 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Feb 13 10:03:40.312198 kubelet[1873]: E0213 10:03:40.312156 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:03:41.209553 kubelet[1873]: E0213 10:03:41.209445 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:42.210351 kubelet[1873]: E0213 10:03:42.210249 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:43.211153 kubelet[1873]: E0213 10:03:43.211039 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:43.284916 env[1473]: time="2024-02-13T10:03:43.284787013Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:03:43.299505 env[1473]: time="2024-02-13T10:03:43.299440636Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:43.299610 kubelet[1873]: E0213 10:03:43.299596 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:03:43.299649 kubelet[1873]: E0213 10:03:43.299622 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:03:43.299649 kubelet[1873]: E0213 10:03:43.299646 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:43.299721 kubelet[1873]: E0213 10:03:43.299665 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:03:44.212314 kubelet[1873]: E0213 10:03:44.212204 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 10:03:45.212926 kubelet[1873]: E0213 10:03:45.212856 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:46.213652 kubelet[1873]: E0213 10:03:46.213544 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:47.214054 kubelet[1873]: E0213 10:03:47.213938 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:48.214776 kubelet[1873]: E0213 10:03:48.214660 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:49.100705 kubelet[1873]: E0213 10:03:49.100598 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:49.215059 kubelet[1873]: E0213 10:03:49.214948 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:50.216201 kubelet[1873]: E0213 10:03:50.216084 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:51.216422 kubelet[1873]: E0213 10:03:51.216310 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:52.216669 kubelet[1873]: E0213 10:03:52.216559 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:53.217463 kubelet[1873]: E0213 10:03:53.217342 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:53.285287 env[1473]: time="2024-02-13T10:03:53.285176880Z" level=info msg="StopPodSandbox for 
\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:03:53.311751 env[1473]: time="2024-02-13T10:03:53.311718359Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:53.311883 kubelet[1873]: E0213 10:03:53.311873 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:03:53.311915 kubelet[1873]: E0213 10:03:53.311900 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:03:53.311935 kubelet[1873]: E0213 10:03:53.311919 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:53.311979 kubelet[1873]: E0213 10:03:53.311939 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:03:54.218670 kubelet[1873]: E0213 10:03:54.218471 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:55.219351 kubelet[1873]: E0213 10:03:55.219237 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:56.220117 kubelet[1873]: E0213 10:03:56.219993 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:56.284797 env[1473]: time="2024-02-13T10:03:56.284642766Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:03:56.310879 env[1473]: time="2024-02-13T10:03:56.310816877Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:03:56.310993 kubelet[1873]: E0213 10:03:56.310983 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:03:56.311035 kubelet[1873]: E0213 10:03:56.311005 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:03:56.311035 kubelet[1873]: E0213 10:03:56.311026 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:03:56.311100 kubelet[1873]: E0213 10:03:56.311042 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:03:57.220353 kubelet[1873]: E0213 10:03:57.220291 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:58.220964 kubelet[1873]: E0213 10:03:58.220845 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:03:59.222097 kubelet[1873]: E0213 10:03:59.221986 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:00.223058 kubelet[1873]: E0213 10:04:00.222931 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:01.224013 kubelet[1873]: E0213 10:04:01.223899 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:02.224607 kubelet[1873]: E0213 10:04:02.224492 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:03.224743 kubelet[1873]: E0213 10:04:03.224638 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:04.224886 kubelet[1873]: E0213 10:04:04.224814 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:05.225210 kubelet[1873]: E0213 10:04:05.225098 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:06.226072 kubelet[1873]: E0213 10:04:06.225953 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:07.227239 kubelet[1873]: E0213 10:04:07.227127 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:08.228116 kubelet[1873]: E0213 10:04:08.228000 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:04:08.284634 env[1473]: time="2024-02-13T10:04:08.284501567Z" level=info msg="StopPodSandbox for 
\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\""
Feb 13 10:04:08.311147 env[1473]: time="2024-02-13T10:04:08.311080808Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:08.311310 kubelet[1873]: E0213 10:04:08.311300 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534"
Feb 13 10:04:08.311346 kubelet[1873]: E0213 10:04:08.311326 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534}
Feb 13 10:04:08.311368 kubelet[1873]: E0213 10:04:08.311348 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:08.311368 kubelet[1873]: E0213 10:04:08.311366 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5
Feb 13 10:04:09.101587 kubelet[1873]: E0213 10:04:09.101509 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:09.228393 kubelet[1873]: E0213 10:04:09.228266 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:10.229342 kubelet[1873]: E0213 10:04:10.229230 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:10.285033 env[1473]: time="2024-02-13T10:04:10.284934916Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\""
Feb 13 10:04:10.311886 env[1473]: time="2024-02-13T10:04:10.311823594Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:10.312012 kubelet[1873]: E0213 10:04:10.311990 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af"
Feb 13 10:04:10.312044 kubelet[1873]: E0213 10:04:10.312015 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af}
Feb 13 10:04:10.312044 kubelet[1873]: E0213 10:04:10.312035 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:10.312118 kubelet[1873]: E0213 10:04:10.312052 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb
Feb 13 10:04:11.229984 kubelet[1873]: E0213 10:04:11.229871 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:12.231228 kubelet[1873]: E0213 10:04:12.231109 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:13.232068 kubelet[1873]: E0213 10:04:13.231957 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:14.232775 kubelet[1873]: E0213 10:04:14.232663 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:15.232956 kubelet[1873]: E0213 10:04:15.232906 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:16.233416 kubelet[1873]: E0213 10:04:16.233287 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:17.234757 kubelet[1873]: E0213 10:04:17.234642 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:18.235229 kubelet[1873]: E0213 10:04:18.235110 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:19.235683 kubelet[1873]: E0213 10:04:19.235581 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:20.236265 kubelet[1873]: E0213 10:04:20.236195 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:21.236761 kubelet[1873]: E0213 10:04:21.236646 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:21.284931 env[1473]: time="2024-02-13T10:04:21.284797067Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\""
Feb 13 10:04:21.313971 env[1473]: time="2024-02-13T10:04:21.313912652Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:21.314124 kubelet[1873]: E0213 10:04:21.314103 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af"
Feb 13 10:04:21.314174 kubelet[1873]: E0213 10:04:21.314133 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af}
Feb 13 10:04:21.314174 kubelet[1873]: E0213 10:04:21.314165 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:21.314250 kubelet[1873]: E0213 10:04:21.314189 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb
Feb 13 10:04:22.236865 kubelet[1873]: E0213 10:04:22.236788 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:23.237423 kubelet[1873]: E0213 10:04:23.237310 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:23.285057 env[1473]: time="2024-02-13T10:04:23.284931414Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\""
Feb 13 10:04:23.314366 env[1473]: time="2024-02-13T10:04:23.314308007Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:23.314515 kubelet[1873]: E0213 10:04:23.314503 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534"
Feb 13 10:04:23.314552 kubelet[1873]: E0213 10:04:23.314529 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534}
Feb 13 10:04:23.314573 kubelet[1873]: E0213 10:04:23.314555 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:23.314573 kubelet[1873]: E0213 10:04:23.314573 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5
Feb 13 10:04:24.237897 kubelet[1873]: E0213 10:04:24.237828 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:25.239091 kubelet[1873]: E0213 10:04:25.239022 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:26.239264 kubelet[1873]: E0213 10:04:26.239159 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:27.239495 kubelet[1873]: E0213 10:04:27.239418 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:28.239705 kubelet[1873]: E0213 10:04:28.239626 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:29.101430 kubelet[1873]: E0213 10:04:29.101265 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:29.240740 kubelet[1873]: E0213 10:04:29.240668 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:30.241212 kubelet[1873]: E0213 10:04:30.241141 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:31.241501 kubelet[1873]: E0213 10:04:31.241424 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:32.242666 kubelet[1873]: E0213 10:04:32.242562 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:33.243500 kubelet[1873]: E0213 10:04:33.243398 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:34.244160 kubelet[1873]: E0213 10:04:34.244092 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:34.284643 env[1473]: time="2024-02-13T10:04:34.284548197Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\""
Feb 13 10:04:34.284643 env[1473]: time="2024-02-13T10:04:34.284616925Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\""
Feb 13 10:04:34.314621 env[1473]: time="2024-02-13T10:04:34.314584886Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:34.314719 env[1473]: time="2024-02-13T10:04:34.314584925Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:34.314760 kubelet[1873]: E0213 10:04:34.314734 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534"
Feb 13 10:04:34.314807 kubelet[1873]: E0213 10:04:34.314764 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534}
Feb 13 10:04:34.314807 kubelet[1873]: E0213 10:04:34.314797 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:34.314886 kubelet[1873]: E0213 10:04:34.314737 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af"
Feb 13 10:04:34.314886 kubelet[1873]: E0213 10:04:34.314825 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af}
Feb 13 10:04:34.314886 kubelet[1873]: E0213 10:04:34.314825 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5
Feb 13 10:04:34.314886 kubelet[1873]: E0213 10:04:34.314844 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:34.314988 kubelet[1873]: E0213 10:04:34.314859 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb
Feb 13 10:04:35.245290 kubelet[1873]: E0213 10:04:35.245219 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:36.245498 kubelet[1873]: E0213 10:04:36.245424 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:37.246286 kubelet[1873]: E0213 10:04:37.246209 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:38.247509 kubelet[1873]: E0213 10:04:38.247391 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:39.247762 kubelet[1873]: E0213 10:04:39.247649 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:40.248432 kubelet[1873]: E0213 10:04:40.248254 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:41.249189 kubelet[1873]: E0213 10:04:41.249073 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:42.249980 kubelet[1873]: E0213 10:04:42.249862 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:43.250912 kubelet[1873]: E0213 10:04:43.250801 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:44.251148 kubelet[1873]: E0213 10:04:44.251023 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:45.252172 kubelet[1873]: E0213 10:04:45.252053 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:46.252972 kubelet[1873]: E0213 10:04:46.252852 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:47.253269 kubelet[1873]: E0213 10:04:47.253155 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:47.285498 env[1473]: time="2024-02-13T10:04:47.285402577Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\""
Feb 13 10:04:47.286638 env[1473]: time="2024-02-13T10:04:47.285594892Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\""
Feb 13 10:04:47.311919 env[1473]: time="2024-02-13T10:04:47.311885789Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:47.312098 kubelet[1873]: E0213 10:04:47.312086 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af"
Feb 13 10:04:47.312136 kubelet[1873]: E0213 10:04:47.312119 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af}
Feb 13 10:04:47.312157 kubelet[1873]: E0213 10:04:47.312150 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:47.312230 kubelet[1873]: E0213 10:04:47.312173 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb
Feb 13 10:04:47.312280 env[1473]: time="2024-02-13T10:04:47.312229892Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:04:47.312309 kubelet[1873]: E0213 10:04:47.312304 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534"
Feb 13 10:04:47.312332 kubelet[1873]: E0213 10:04:47.312313 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534}
Feb 13 10:04:47.312332 kubelet[1873]: E0213 10:04:47.312330 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:04:47.312387 kubelet[1873]: E0213 10:04:47.312343 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5
Feb 13 10:04:48.254012 kubelet[1873]: E0213 10:04:48.253895 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:49.101357 kubelet[1873]: E0213 10:04:49.101237 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:49.254832 kubelet[1873]: E0213 10:04:49.254710 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:50.255865 kubelet[1873]: E0213 10:04:50.255751 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:51.256706 kubelet[1873]: E0213 10:04:51.256588 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:52.257813 kubelet[1873]: E0213 10:04:52.257694 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:53.258420 kubelet[1873]: E0213 10:04:53.258292 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:54.258682 kubelet[1873]: E0213 10:04:54.258562 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:55.259773 kubelet[1873]: E0213 10:04:55.259697 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:56.260825 kubelet[1873]: E0213 10:04:56.260712 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:57.262039 kubelet[1873]: E0213 10:04:57.261916 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:58.262280 kubelet[1873]: E0213 10:04:58.262150 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:04:59.263111 kubelet[1873]: E0213 10:04:59.262994 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:00.263936 kubelet[1873]: E0213 10:05:00.263815 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:00.285122 env[1473]: time="2024-02-13T10:05:00.284970562Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\""
Feb 13 10:05:00.314231 env[1473]: time="2024-02-13T10:05:00.314178328Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:05:00.314378 kubelet[1873]: E0213 10:05:00.314363 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af"
Feb 13 10:05:00.314472 kubelet[1873]: E0213 10:05:00.314412 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af}
Feb 13 10:05:00.314507 kubelet[1873]: E0213 10:05:00.314472 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:05:00.314507 kubelet[1873]: E0213 10:05:00.314499 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb
Feb 13 10:05:01.264244 kubelet[1873]: E0213 10:05:01.264117 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:02.265336 kubelet[1873]: E0213 10:05:02.265263 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:02.285540 env[1473]: time="2024-02-13T10:05:02.285449393Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\""
Feb 13 10:05:02.314690 env[1473]: time="2024-02-13T10:05:02.314598743Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 10:05:02.314837 kubelet[1873]: E0213 10:05:02.314821 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534"
Feb 13 10:05:02.314895 kubelet[1873]: E0213 10:05:02.314851 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534}
Feb 13 10:05:02.314926 kubelet[1873]: E0213 10:05:02.314902 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 13 10:05:02.314975 kubelet[1873]: E0213 10:05:02.314927 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5
Feb 13 10:05:03.265554 kubelet[1873]: E0213 10:05:03.265437 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:04.266271 kubelet[1873]: E0213 10:05:04.266160 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:05.266411 kubelet[1873]: E0213 10:05:05.266302 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:06.267269 kubelet[1873]: E0213 10:05:06.267148 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:07.268206 kubelet[1873]: E0213 10:05:07.268136 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:08.269122 kubelet[1873]: E0213 10:05:08.268997 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:09.100677 kubelet[1873]: E0213 10:05:09.100576 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:09.269921 kubelet[1873]: E0213 10:05:09.269810 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:09.870727 kubelet[1873]: I0213 10:05:09.870665 1873 topology_manager.go:210] "Topology Admit Handler"
Feb 13 10:05:09.883435 systemd[1]: Created slice kubepods-besteffort-pod36932f06_7df4_41dc_9f83_abe5596dbe2f.slice.
Feb 13 10:05:09.917348 kubelet[1873]: I0213 10:05:09.917254 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlch\" (UniqueName: \"kubernetes.io/projected/36932f06-7df4-41dc-9f83-abe5596dbe2f-kube-api-access-rvlch\") pod \"nfs-server-provisioner-0\" (UID: \"36932f06-7df4-41dc-9f83-abe5596dbe2f\") " pod="default/nfs-server-provisioner-0"
Feb 13 10:05:09.917580 kubelet[1873]: I0213 10:05:09.917405 1873 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/36932f06-7df4-41dc-9f83-abe5596dbe2f-data\") pod \"nfs-server-provisioner-0\" (UID: \"36932f06-7df4-41dc-9f83-abe5596dbe2f\") " pod="default/nfs-server-provisioner-0"
Feb 13 10:05:09.935000 audit[3444]: NETFILTER_CFG table=filter:79 family=2 entries=24 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 13 10:05:09.965449 kernel: kauditd_printk_skb: 34 callbacks suppressed
Feb 13 10:05:09.965514 kernel: audit: type=1325 audit(1707818709.935:680): table=filter:79 family=2 entries=24 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 13 10:05:09.935000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffdeb2cb490 a2=0 a3=7ffdeb2cb47c items=0 ppid=2202 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:05:10.123950 kernel: audit: type=1300 audit(1707818709.935:680): arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffdeb2cb490 a2=0 a3=7ffdeb2cb47c items=0 ppid=2202 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:05:10.123984 kernel: audit: type=1327 audit(1707818709.935:680): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 13 10:05:09.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 13 10:05:09.938000 audit[3444]: NETFILTER_CFG table=nat:80 family=2 entries=30 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 13 10:05:10.188913 env[1473]: time="2024-02-13T10:05:10.188860487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:36932f06-7df4-41dc-9f83-abe5596dbe2f,Namespace:default,Attempt:0,}"
Feb 13 10:05:09.938000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdeb2cb490 a2=0 a3=31030 items=0 ppid=2202 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 10:05:10.270426 kubelet[1873]: E0213 10:05:10.270359 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 10:05:10.344983 kernel: audit: type=1325 audit(1707818709.938:681): table=nat:80 family=2 entries=30 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 13 10:05:10.345015 kernel: audit: type=1300 audit(1707818709.938:681): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdeb2cb490 a2=0 a3=31030 items=0 ppid=2202 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor"
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:05:10.345031 kernel: audit: type=1327 audit(1707818709.938:681): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:05:09.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:05:10.355678 env[1473]: time="2024-02-13T10:05:10.355616969Z" level=error msg="Failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:10.356514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f-shm.mount: Deactivated successfully. Feb 13 10:05:10.403768 env[1473]: time="2024-02-13T10:05:10.403720770Z" level=error msg="encountered an error cleaning up failed sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:10.403768 env[1473]: time="2024-02-13T10:05:10.403753927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:36932f06-7df4-41dc-9f83-abe5596dbe2f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:10.403903 
kubelet[1873]: E0213 10:05:10.403869 1873 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:10.403903 kubelet[1873]: E0213 10:05:10.403901 1873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nfs-server-provisioner-0" Feb 13 10:05:10.403968 kubelet[1873]: E0213 10:05:10.403914 1873 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nfs-server-provisioner-0" Feb 13 10:05:10.403968 kubelet[1873]: E0213 10:05:10.403944 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nfs-server-provisioner-0_default(36932f06-7df4-41dc-9f83-abe5596dbe2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nfs-server-provisioner-0_default(36932f06-7df4-41dc-9f83-abe5596dbe2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:05:10.344000 audit[3497]: NETFILTER_CFG table=filter:81 family=2 entries=36 op=nft_register_rule pid=3497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:05:10.344000 audit[3497]: SYSCALL arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffc031a2a70 a2=0 a3=7ffc031a2a5c items=0 ppid=2202 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:05:10.565289 kernel: audit: type=1325 audit(1707818710.344:682): table=filter:81 family=2 entries=36 op=nft_register_rule pid=3497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:05:10.565321 kernel: audit: type=1300 audit(1707818710.344:682): arch=c000003e syscall=46 success=yes exit=12476 a0=3 a1=7ffc031a2a70 a2=0 a3=7ffc031a2a5c items=0 ppid=2202 pid=3497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:05:10.565339 kernel: audit: type=1327 audit(1707818710.344:682): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:05:10.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:05:10.624000 audit[3497]: NETFILTER_CFG table=nat:82 family=2 entries=30 op=nft_register_rule pid=3497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:05:10.624000 audit[3497]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc031a2a70 a2=0 a3=31030 items=0 ppid=2202 pid=3497 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 10:05:10.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 10:05:10.685577 kernel: audit: type=1325 audit(1707818710.624:683): table=nat:82 family=2 entries=30 op=nft_register_rule pid=3497 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 10:05:10.899655 kubelet[1873]: I0213 10:05:10.899550 1873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:05:10.900629 env[1473]: time="2024-02-13T10:05:10.900515449Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:05:10.926837 env[1473]: time="2024-02-13T10:05:10.926773699Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:10.927086 kubelet[1873]: E0213 10:05:10.927050 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:05:10.927086 kubelet[1873]: E0213 
10:05:10.927072 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:05:10.927137 kubelet[1873]: E0213 10:05:10.927095 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:10.927137 kubelet[1873]: E0213 10:05:10.927112 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:05:11.271553 kubelet[1873]: E0213 10:05:11.271482 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:12.272030 kubelet[1873]: E0213 10:05:12.271921 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:13.272806 kubelet[1873]: E0213 10:05:13.272686 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:14.273918 kubelet[1873]: E0213 10:05:14.273812 1873 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:14.284631 env[1473]: time="2024-02-13T10:05:14.284538982Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:05:14.310656 env[1473]: time="2024-02-13T10:05:14.310623034Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:14.310818 kubelet[1873]: E0213 10:05:14.310788 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:05:14.310818 kubelet[1873]: E0213 10:05:14.310813 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:05:14.310885 kubelet[1873]: E0213 10:05:14.310834 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:14.310885 kubelet[1873]: E0213 10:05:14.310851 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:05:15.274143 kubelet[1873]: E0213 10:05:15.274039 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:15.284842 env[1473]: time="2024-02-13T10:05:15.284755287Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:05:15.313661 env[1473]: time="2024-02-13T10:05:15.313627144Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:15.313851 kubelet[1873]: E0213 10:05:15.313840 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:05:15.313883 kubelet[1873]: E0213 10:05:15.313867 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:05:15.313905 kubelet[1873]: E0213 10:05:15.313891 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:15.313946 kubelet[1873]: E0213 10:05:15.313908 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:05:16.274343 kubelet[1873]: E0213 10:05:16.274233 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:17.274520 kubelet[1873]: E0213 10:05:17.274390 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:18.275291 kubelet[1873]: E0213 10:05:18.275169 1873 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:19.276149 kubelet[1873]: E0213 10:05:19.276044 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:20.276649 kubelet[1873]: E0213 10:05:20.276526 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:21.277641 kubelet[1873]: E0213 10:05:21.277525 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:22.278570 kubelet[1873]: E0213 10:05:22.278462 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:23.279448 kubelet[1873]: E0213 10:05:23.279332 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:24.279838 kubelet[1873]: E0213 10:05:24.279643 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:25.280682 kubelet[1873]: E0213 10:05:25.280572 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:25.285402 env[1473]: time="2024-02-13T10:05:25.285297070Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:05:25.314835 env[1473]: time="2024-02-13T10:05:25.314773969Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
10:05:25.315014 kubelet[1873]: E0213 10:05:25.315003 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:05:25.315045 kubelet[1873]: E0213 10:05:25.315030 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:05:25.315063 kubelet[1873]: E0213 10:05:25.315051 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:25.315102 kubelet[1873]: E0213 10:05:25.315069 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:05:26.280880 kubelet[1873]: E0213 
10:05:26.280764 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:26.285454 env[1473]: time="2024-02-13T10:05:26.285314568Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:05:26.315003 env[1473]: time="2024-02-13T10:05:26.314972661Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:26.315181 kubelet[1873]: E0213 10:05:26.315146 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:05:26.315181 kubelet[1873]: E0213 10:05:26.315170 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:05:26.315235 kubelet[1873]: E0213 10:05:26.315192 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:26.315235 kubelet[1873]: E0213 10:05:26.315208 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:05:27.281240 kubelet[1873]: E0213 10:05:27.281126 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:28.282303 kubelet[1873]: E0213 10:05:28.282185 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:29.100741 kubelet[1873]: E0213 10:05:29.100637 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:29.282583 kubelet[1873]: E0213 10:05:29.282478 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:30.282822 kubelet[1873]: E0213 10:05:30.282722 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:30.285353 env[1473]: time="2024-02-13T10:05:30.285241825Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:05:30.314435 env[1473]: time="2024-02-13T10:05:30.314359419Z" level=error msg="StopPodSandbox for 
\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:30.314610 kubelet[1873]: E0213 10:05:30.314570 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:05:30.314610 kubelet[1873]: E0213 10:05:30.314593 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:05:30.314671 kubelet[1873]: E0213 10:05:30.314615 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:30.314671 kubelet[1873]: E0213 10:05:30.314632 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:05:31.283110 kubelet[1873]: E0213 10:05:31.282978 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:32.283674 kubelet[1873]: E0213 10:05:32.283564 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:33.284537 kubelet[1873]: E0213 10:05:33.284465 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:34.285420 kubelet[1873]: E0213 10:05:34.285256 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:35.286298 kubelet[1873]: E0213 10:05:35.286227 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:36.286748 kubelet[1873]: E0213 10:05:36.286633 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:37.285168 env[1473]: time="2024-02-13T10:05:37.285022892Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:05:37.287211 kubelet[1873]: E0213 10:05:37.287123 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:37.315017 env[1473]: time="2024-02-13T10:05:37.314952647Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to 
destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:37.315152 kubelet[1873]: E0213 10:05:37.315119 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:05:37.315152 kubelet[1873]: E0213 10:05:37.315143 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:05:37.315213 kubelet[1873]: E0213 10:05:37.315166 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:37.315213 kubelet[1873]: E0213 10:05:37.315183 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:05:38.287839 kubelet[1873]: E0213 10:05:38.287717 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:39.285554 env[1473]: time="2024-02-13T10:05:39.285409259Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:05:39.287948 kubelet[1873]: E0213 10:05:39.287876 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:39.312324 env[1473]: time="2024-02-13T10:05:39.312266518Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:39.312508 kubelet[1873]: E0213 10:05:39.312483 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:05:39.312545 kubelet[1873]: E0213 10:05:39.312521 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:05:39.312545 kubelet[1873]: E0213 10:05:39.312544 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:39.312608 kubelet[1873]: E0213 10:05:39.312560 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:05:40.288105 kubelet[1873]: E0213 10:05:40.288023 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:41.288275 kubelet[1873]: E0213 10:05:41.288180 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:42.288536 kubelet[1873]: E0213 10:05:42.288429 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:43.288882 kubelet[1873]: E0213 10:05:43.288826 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
13 10:05:44.289786 kubelet[1873]: E0213 10:05:44.289670 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:45.285578 env[1473]: time="2024-02-13T10:05:45.285412308Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:05:45.290001 kubelet[1873]: E0213 10:05:45.289915 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:45.312199 env[1473]: time="2024-02-13T10:05:45.312126230Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:45.312340 kubelet[1873]: E0213 10:05:45.312330 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:05:45.312377 kubelet[1873]: E0213 10:05:45.312354 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:05:45.312425 kubelet[1873]: E0213 10:05:45.312380 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:45.312484 kubelet[1873]: E0213 10:05:45.312425 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:05:46.290838 kubelet[1873]: E0213 10:05:46.290717 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:47.291740 kubelet[1873]: E0213 10:05:47.291634 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:48.292314 kubelet[1873]: E0213 10:05:48.292208 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:49.101511 kubelet[1873]: E0213 10:05:49.101401 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:49.292900 kubelet[1873]: E0213 10:05:49.292854 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:50.293747 kubelet[1873]: E0213 10:05:50.293636 1873 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:51.284798 env[1473]: time="2024-02-13T10:05:51.284685619Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:05:51.294602 kubelet[1873]: E0213 10:05:51.294584 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:51.297897 env[1473]: time="2024-02-13T10:05:51.297837535Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:51.298000 kubelet[1873]: E0213 10:05:51.297959 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:05:51.298000 kubelet[1873]: E0213 10:05:51.297982 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:05:51.298059 kubelet[1873]: E0213 10:05:51.298004 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:51.298059 kubelet[1873]: E0213 10:05:51.298021 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:05:52.284652 env[1473]: time="2024-02-13T10:05:52.284548596Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:05:52.295048 kubelet[1873]: E0213 10:05:52.294994 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:52.310760 env[1473]: time="2024-02-13T10:05:52.310722403Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:52.310974 kubelet[1873]: E0213 10:05:52.310890 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:05:52.310974 kubelet[1873]: E0213 10:05:52.310915 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:05:52.310974 kubelet[1873]: E0213 10:05:52.310935 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:52.310974 kubelet[1873]: E0213 10:05:52.310952 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:05:53.295337 kubelet[1873]: E0213 10:05:53.295228 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:54.295474 kubelet[1873]: E0213 10:05:54.295347 1873 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:55.296283 kubelet[1873]: E0213 10:05:55.296184 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:56.297371 kubelet[1873]: E0213 10:05:56.297254 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:57.285106 env[1473]: time="2024-02-13T10:05:57.284946936Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:05:57.297888 kubelet[1873]: E0213 10:05:57.297801 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:57.311745 env[1473]: time="2024-02-13T10:05:57.311714284Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:05:57.311877 kubelet[1873]: E0213 10:05:57.311867 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:05:57.311916 kubelet[1873]: E0213 10:05:57.311892 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" 
podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:05:57.311916 kubelet[1873]: E0213 10:05:57.311913 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:05:57.311981 kubelet[1873]: E0213 10:05:57.311932 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:05:58.298200 kubelet[1873]: E0213 10:05:58.298082 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:05:59.298361 kubelet[1873]: E0213 10:05:59.298294 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:00.299552 kubelet[1873]: E0213 10:06:00.299483 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:01.300808 kubelet[1873]: E0213 10:06:01.300690 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 10:06:02.285588 env[1473]: time="2024-02-13T10:06:02.285466712Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:06:02.301868 kubelet[1873]: E0213 10:06:02.301777 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:02.314728 env[1473]: time="2024-02-13T10:06:02.314676228Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:02.314892 kubelet[1873]: E0213 10:06:02.314882 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:06:02.314943 kubelet[1873]: E0213 10:06:02.314920 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:06:02.314943 kubelet[1873]: E0213 10:06:02.314939 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:02.315006 kubelet[1873]: E0213 10:06:02.314955 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:06:03.302768 kubelet[1873]: E0213 10:06:03.302663 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:04.304043 kubelet[1873]: E0213 10:06:04.303923 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:05.304561 kubelet[1873]: E0213 10:06:05.304453 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:06.304717 kubelet[1873]: E0213 10:06:06.304596 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:07.288599 env[1473]: time="2024-02-13T10:06:07.288469667Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:06:07.302463 env[1473]: time="2024-02-13T10:06:07.302426652Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed 
to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:07.302601 kubelet[1873]: E0213 10:06:07.302590 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:06:07.302642 kubelet[1873]: E0213 10:06:07.302614 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:06:07.302642 kubelet[1873]: E0213 10:06:07.302636 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:07.302714 kubelet[1873]: E0213 10:06:07.302655 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:06:07.305730 kubelet[1873]: E0213 10:06:07.305721 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:08.306720 kubelet[1873]: E0213 10:06:08.306608 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:09.100884 kubelet[1873]: E0213 10:06:09.100776 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:09.285849 env[1473]: time="2024-02-13T10:06:09.285715204Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:06:09.307750 kubelet[1873]: E0213 10:06:09.307728 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:09.312317 env[1473]: time="2024-02-13T10:06:09.312290876Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:09.312417 kubelet[1873]: E0213 10:06:09.312407 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:06:09.312469 kubelet[1873]: E0213 10:06:09.312433 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:06:09.312469 kubelet[1873]: E0213 10:06:09.312467 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:09.312548 kubelet[1873]: E0213 10:06:09.312493 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:06:10.308533 kubelet[1873]: E0213 10:06:10.308461 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:11.309270 kubelet[1873]: E0213 10:06:11.309202 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:12.309935 kubelet[1873]: E0213 10:06:12.309819 
1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:13.310893 kubelet[1873]: E0213 10:06:13.310815 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:14.285531 env[1473]: time="2024-02-13T10:06:14.285439525Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:06:14.311703 env[1473]: time="2024-02-13T10:06:14.311666409Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:14.311875 kubelet[1873]: E0213 10:06:14.311829 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:06:14.311875 kubelet[1873]: E0213 10:06:14.311852 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:14.311875 kubelet[1873]: E0213 10:06:14.311856 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:06:14.312070 kubelet[1873]: E0213 10:06:14.311880 1873 kuberuntime_manager.go:705] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:14.312070 kubelet[1873]: E0213 10:06:14.311898 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:06:15.312812 kubelet[1873]: E0213 10:06:15.312685 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:16.313261 kubelet[1873]: E0213 10:06:16.313150 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:17.314120 kubelet[1873]: E0213 10:06:17.314015 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:18.314770 kubelet[1873]: E0213 10:06:18.314703 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:19.285703 env[1473]: time="2024-02-13T10:06:19.285604119Z" level=info msg="StopPodSandbox for 
\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:06:19.311803 env[1473]: time="2024-02-13T10:06:19.311772310Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:19.311936 kubelet[1873]: E0213 10:06:19.311925 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:06:19.311984 kubelet[1873]: E0213 10:06:19.311952 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:06:19.312022 kubelet[1873]: E0213 10:06:19.311985 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:19.312022 kubelet[1873]: E0213 10:06:19.312019 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:06:19.315194 kubelet[1873]: E0213 10:06:19.315176 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:20.315492 kubelet[1873]: E0213 10:06:20.315406 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:21.285481 env[1473]: time="2024-02-13T10:06:21.285339442Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:06:21.316649 kubelet[1873]: E0213 10:06:21.316543 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:21.340687 env[1473]: time="2024-02-13T10:06:21.340609216Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:21.340866 kubelet[1873]: E0213 10:06:21.340824 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:06:21.340866 kubelet[1873]: E0213 10:06:21.340861 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:06:21.341006 kubelet[1873]: E0213 10:06:21.340903 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:21.341006 kubelet[1873]: E0213 10:06:21.340936 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:06:22.317890 kubelet[1873]: E0213 10:06:22.317780 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:23.318807 kubelet[1873]: E0213 10:06:23.318698 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:24.319719 kubelet[1873]: E0213 10:06:24.319613 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:25.285179 env[1473]: time="2024-02-13T10:06:25.285045592Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:06:25.320304 kubelet[1873]: E0213 10:06:25.320232 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:25.335915 env[1473]: time="2024-02-13T10:06:25.335837317Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:25.336107 kubelet[1873]: E0213 10:06:25.336059 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:06:25.336107 kubelet[1873]: E0213 10:06:25.336098 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:06:25.336245 kubelet[1873]: E0213 10:06:25.336140 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:25.336245 kubelet[1873]: E0213 10:06:25.336173 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:06:26.321447 kubelet[1873]: E0213 10:06:26.321320 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:27.322224 kubelet[1873]: E0213 10:06:27.322120 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:28.322442 kubelet[1873]: E0213 10:06:28.322323 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:29.101216 kubelet[1873]: E0213 10:06:29.101099 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:29.323440 kubelet[1873]: E0213 10:06:29.323378 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:30.324419 kubelet[1873]: E0213 10:06:30.324304 
1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:31.324991 kubelet[1873]: E0213 10:06:31.324869 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:32.326032 kubelet[1873]: E0213 10:06:32.325923 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:33.326487 kubelet[1873]: E0213 10:06:33.326388 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:34.285514 env[1473]: time="2024-02-13T10:06:34.285348651Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:06:34.314933 env[1473]: time="2024-02-13T10:06:34.314860460Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:34.315087 kubelet[1873]: E0213 10:06:34.315034 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:06:34.315087 kubelet[1873]: E0213 10:06:34.315058 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" 
podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:06:34.315087 kubelet[1873]: E0213 10:06:34.315078 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:34.315182 kubelet[1873]: E0213 10:06:34.315094 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:06:34.326544 kubelet[1873]: E0213 10:06:34.326498 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:35.284694 env[1473]: time="2024-02-13T10:06:35.284545008Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:06:35.314046 env[1473]: time="2024-02-13T10:06:35.313965476Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:35.314310 kubelet[1873]: E0213 10:06:35.314163 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:06:35.314310 kubelet[1873]: E0213 10:06:35.314202 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:06:35.314310 kubelet[1873]: E0213 10:06:35.314224 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:35.314310 kubelet[1873]: E0213 10:06:35.314240 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:06:35.326627 kubelet[1873]: E0213 10:06:35.326594 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:36.327544 kubelet[1873]: E0213 10:06:36.327420 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:37.327848 kubelet[1873]: E0213 10:06:37.327718 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:38.285513 env[1473]: time="2024-02-13T10:06:38.285359599Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:06:38.314801 env[1473]: time="2024-02-13T10:06:38.314742787Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:38.315019 kubelet[1873]: E0213 10:06:38.314972 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:06:38.315019 kubelet[1873]: E0213 10:06:38.314996 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:06:38.315019 kubelet[1873]: E0213 10:06:38.315017 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:38.315126 kubelet[1873]: E0213 10:06:38.315035 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:06:38.328393 kubelet[1873]: E0213 10:06:38.328351 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:39.329492 kubelet[1873]: E0213 10:06:39.329412 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:40.330722 kubelet[1873]: E0213 10:06:40.330605 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:41.331410 kubelet[1873]: E0213 10:06:41.331271 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
13 10:06:42.331731 kubelet[1873]: E0213 10:06:42.331613 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:43.332743 kubelet[1873]: E0213 10:06:43.332624 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:44.333861 kubelet[1873]: E0213 10:06:44.333788 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:45.334717 kubelet[1873]: E0213 10:06:45.334649 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:46.334910 kubelet[1873]: E0213 10:06:46.334789 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:47.335824 kubelet[1873]: E0213 10:06:47.335707 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:48.285680 env[1473]: time="2024-02-13T10:06:48.285538835Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:06:48.312161 env[1473]: time="2024-02-13T10:06:48.312098570Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:48.312310 kubelet[1873]: E0213 10:06:48.312263 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:06:48.312310 kubelet[1873]: E0213 10:06:48.312290 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:06:48.312410 kubelet[1873]: E0213 10:06:48.312319 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:48.312410 kubelet[1873]: E0213 10:06:48.312345 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:06:48.336642 kubelet[1873]: E0213 10:06:48.336612 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:49.101116 kubelet[1873]: E0213 10:06:49.101051 1873 
file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:49.285399 env[1473]: time="2024-02-13T10:06:49.285230513Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:06:49.285743 env[1473]: time="2024-02-13T10:06:49.285441425Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:06:49.315018 env[1473]: time="2024-02-13T10:06:49.314983760Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:49.315127 env[1473]: time="2024-02-13T10:06:49.315024624Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:06:49.315193 kubelet[1873]: E0213 10:06:49.315174 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:06:49.315193 
kubelet[1873]: E0213 10:06:49.315183 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:06:49.315261 kubelet[1873]: E0213 10:06:49.315212 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:06:49.315261 kubelet[1873]: E0213 10:06:49.315213 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:06:49.315261 kubelet[1873]: E0213 10:06:49.315233 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:49.315261 kubelet[1873]: E0213 10:06:49.315233 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:06:49.315261 kubelet[1873]: E0213 10:06:49.315249 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:06:49.315432 kubelet[1873]: E0213 10:06:49.315250 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:06:49.337034 kubelet[1873]: E0213 10:06:49.336998 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:50.338255 kubelet[1873]: E0213 10:06:50.338137 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:51.338504 kubelet[1873]: E0213 10:06:51.338394 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:52.339008 kubelet[1873]: E0213 10:06:52.338941 1873 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:53.339876 kubelet[1873]: E0213 10:06:53.339769 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:54.340912 kubelet[1873]: E0213 10:06:54.340707 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:55.341832 kubelet[1873]: E0213 10:06:55.341713 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:56.342581 kubelet[1873]: E0213 10:06:56.342414 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:57.343510 kubelet[1873]: E0213 10:06:57.343407 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:58.343689 kubelet[1873]: E0213 10:06:58.343570 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:06:59.344902 kubelet[1873]: E0213 10:06:59.344787 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:00.346045 kubelet[1873]: E0213 10:07:00.345932 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:01.285775 env[1473]: time="2024-02-13T10:07:01.285642273Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:07:01.311462 env[1473]: time="2024-02-13T10:07:01.311395064Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox 
\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:01.311608 kubelet[1873]: E0213 10:07:01.311562 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:07:01.311608 kubelet[1873]: E0213 10:07:01.311590 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:07:01.311675 kubelet[1873]: E0213 10:07:01.311612 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:01.311675 kubelet[1873]: E0213 10:07:01.311630 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:07:01.346228 kubelet[1873]: E0213 10:07:01.346165 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:02.285326 env[1473]: time="2024-02-13T10:07:02.285196688Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:07:02.311577 env[1473]: time="2024-02-13T10:07:02.311518469Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:02.311842 kubelet[1873]: E0213 10:07:02.311733 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:07:02.311842 kubelet[1873]: E0213 10:07:02.311756 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:07:02.311842 kubelet[1873]: E0213 10:07:02.311791 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:02.311842 kubelet[1873]: E0213 10:07:02.311809 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:07:02.346798 kubelet[1873]: E0213 10:07:02.346745 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:03.347261 kubelet[1873]: E0213 10:07:03.347144 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:04.284574 env[1473]: time="2024-02-13T10:07:04.284440949Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:07:04.310825 env[1473]: time="2024-02-13T10:07:04.310760142Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 10:07:04.310919 kubelet[1873]: E0213 10:07:04.310911 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:07:04.310952 kubelet[1873]: E0213 10:07:04.310937 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:07:04.310973 kubelet[1873]: E0213 10:07:04.310957 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:04.311021 kubelet[1873]: E0213 10:07:04.310974 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" 
podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:07:04.348047 kubelet[1873]: E0213 10:07:04.347937 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:05.348466 kubelet[1873]: E0213 10:07:05.348391 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:06.349277 kubelet[1873]: E0213 10:07:06.349158 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:07.349955 kubelet[1873]: E0213 10:07:07.349846 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:08.350173 kubelet[1873]: E0213 10:07:08.350062 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:09.100935 kubelet[1873]: E0213 10:07:09.100819 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:09.351478 kubelet[1873]: E0213 10:07:09.351226 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:10.351886 kubelet[1873]: E0213 10:07:10.351774 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:11.352834 kubelet[1873]: E0213 10:07:11.352715 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:12.353663 kubelet[1873]: E0213 10:07:12.353550 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:13.285731 env[1473]: time="2024-02-13T10:07:13.285603080Z" level=info msg="StopPodSandbox for 
\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:07:13.311946 env[1473]: time="2024-02-13T10:07:13.311883341Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:13.312102 kubelet[1873]: E0213 10:07:13.312037 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:07:13.312102 kubelet[1873]: E0213 10:07:13.312060 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:07:13.312102 kubelet[1873]: E0213 10:07:13.312087 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:13.312102 kubelet[1873]: E0213 10:07:13.312104 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:07:13.354851 kubelet[1873]: E0213 10:07:13.354735 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:14.355428 kubelet[1873]: E0213 10:07:14.355308 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:15.285271 env[1473]: time="2024-02-13T10:07:15.285152225Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:07:15.312091 env[1473]: time="2024-02-13T10:07:15.312028775Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:15.312261 kubelet[1873]: E0213 10:07:15.312251 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:07:15.312298 kubelet[1873]: E0213 10:07:15.312276 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:07:15.312298 kubelet[1873]: E0213 10:07:15.312297 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:15.312361 kubelet[1873]: E0213 10:07:15.312314 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:07:15.356320 kubelet[1873]: E0213 10:07:15.356209 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:16.284828 env[1473]: time="2024-02-13T10:07:16.284730653Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:07:16.314235 env[1473]: time="2024-02-13T10:07:16.314179737Z" level=error msg="StopPodSandbox for 
\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:16.314506 kubelet[1873]: E0213 10:07:16.314409 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:07:16.314506 kubelet[1873]: E0213 10:07:16.314448 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:07:16.314506 kubelet[1873]: E0213 10:07:16.314467 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:16.314506 kubelet[1873]: E0213 10:07:16.314490 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:07:16.357249 kubelet[1873]: E0213 10:07:16.357196 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:17.358104 kubelet[1873]: E0213 10:07:17.357990 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:18.359020 kubelet[1873]: E0213 10:07:18.358902 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:19.359163 kubelet[1873]: E0213 10:07:19.359029 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:20.359952 kubelet[1873]: E0213 10:07:20.359833 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:21.360360 kubelet[1873]: E0213 10:07:21.360243 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:22.361246 kubelet[1873]: E0213 10:07:22.361128 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:23.361754 kubelet[1873]: E0213 10:07:23.361650 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:24.362035 kubelet[1873]: E0213 10:07:24.361931 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
10:07:25.362722 kubelet[1873]: E0213 10:07:25.362642 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:26.363677 kubelet[1873]: E0213 10:07:26.363551 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:27.285542 env[1473]: time="2024-02-13T10:07:27.285422258Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:07:27.285542 env[1473]: time="2024-02-13T10:07:27.285405408Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:07:27.312346 env[1473]: time="2024-02-13T10:07:27.312310616Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:27.312346 env[1473]: time="2024-02-13T10:07:27.312326531Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:27.312532 kubelet[1873]: E0213 10:07:27.312503 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:07:27.312594 kubelet[1873]: E0213 10:07:27.312560 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:07:27.312615 kubelet[1873]: E0213 10:07:27.312600 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:27.312675 kubelet[1873]: E0213 10:07:27.312617 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:07:27.312675 kubelet[1873]: E0213 10:07:27.312503 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:07:27.312675 kubelet[1873]: E0213 10:07:27.312630 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:07:27.312675 kubelet[1873]: E0213 10:07:27.312645 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:27.312774 kubelet[1873]: E0213 10:07:27.312657 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:07:27.364137 kubelet[1873]: E0213 10:07:27.364075 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:28.285306 env[1473]: time="2024-02-13T10:07:28.285183825Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:07:28.315015 env[1473]: 
time="2024-02-13T10:07:28.314982023Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:28.315278 kubelet[1873]: E0213 10:07:28.315207 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:07:28.315278 kubelet[1873]: E0213 10:07:28.315233 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:07:28.315278 kubelet[1873]: E0213 10:07:28.315257 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:28.315278 kubelet[1873]: E0213 10:07:28.315274 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:07:28.365137 kubelet[1873]: E0213 10:07:28.365080 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:29.100681 kubelet[1873]: E0213 10:07:29.100579 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:29.366225 kubelet[1873]: E0213 10:07:29.366005 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:30.366728 kubelet[1873]: E0213 10:07:30.366608 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:31.367981 kubelet[1873]: E0213 10:07:31.367862 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:32.368513 kubelet[1873]: E0213 10:07:32.368411 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:33.369095 kubelet[1873]: E0213 10:07:33.368979 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:34.370107 kubelet[1873]: E0213 10:07:34.369984 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:35.371033 kubelet[1873]: E0213 10:07:35.370928 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:36.371244 kubelet[1873]: E0213 10:07:36.371133 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:37.371456 kubelet[1873]: E0213 10:07:37.371393 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:38.284578 env[1473]: time="2024-02-13T10:07:38.284432546Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:07:38.311242 env[1473]: time="2024-02-13T10:07:38.311156678Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:38.311360 kubelet[1873]: E0213 10:07:38.311334 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:07:38.311416 kubelet[1873]: E0213 10:07:38.311361 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:07:38.311416 kubelet[1873]: E0213 10:07:38.311387 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:38.311511 kubelet[1873]: E0213 10:07:38.311439 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:07:38.371582 kubelet[1873]: E0213 10:07:38.371522 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:39.285664 env[1473]: time="2024-02-13T10:07:39.285539786Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:07:39.337961 env[1473]: time="2024-02-13T10:07:39.337865597Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:39.338138 kubelet[1873]: E0213 10:07:39.338113 1873 remote_runtime.go:205] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:07:39.338222 kubelet[1873]: E0213 10:07:39.338157 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:07:39.338222 kubelet[1873]: E0213 10:07:39.338203 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:39.338359 kubelet[1873]: E0213 10:07:39.338235 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:07:39.371718 kubelet[1873]: E0213 10:07:39.371648 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 10:07:40.372819 kubelet[1873]: E0213 10:07:40.372700 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:41.285177 env[1473]: time="2024-02-13T10:07:41.285038362Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:07:41.315182 env[1473]: time="2024-02-13T10:07:41.315150733Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:41.315332 kubelet[1873]: E0213 10:07:41.315323 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:07:41.315378 kubelet[1873]: E0213 10:07:41.315349 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:07:41.315378 kubelet[1873]: E0213 10:07:41.315376 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:41.315486 kubelet[1873]: E0213 10:07:41.315396 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:07:41.373478 kubelet[1873]: E0213 10:07:41.373426 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:42.374643 kubelet[1873]: E0213 10:07:42.374563 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:43.374901 kubelet[1873]: E0213 10:07:43.374773 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:44.375118 kubelet[1873]: E0213 10:07:44.374994 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:45.375647 kubelet[1873]: E0213 10:07:45.375523 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:46.376258 kubelet[1873]: E0213 10:07:46.376148 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
10:07:47.377329 kubelet[1873]: E0213 10:07:47.377224 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:48.378342 kubelet[1873]: E0213 10:07:48.378222 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:49.101244 kubelet[1873]: E0213 10:07:49.101131 1873 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:49.379482 kubelet[1873]: E0213 10:07:49.379225 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:50.379739 kubelet[1873]: E0213 10:07:50.379622 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:51.380958 kubelet[1873]: E0213 10:07:51.380843 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:52.381395 kubelet[1873]: E0213 10:07:52.381266 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:53.284654 env[1473]: time="2024-02-13T10:07:53.284529956Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:07:53.314545 env[1473]: time="2024-02-13T10:07:53.314462719Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:53.314675 kubelet[1873]: E0213 10:07:53.314664 1873 remote_runtime.go:205] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:07:53.314725 kubelet[1873]: E0213 10:07:53.314702 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:07:53.314725 kubelet[1873]: E0213 10:07:53.314722 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:53.314784 kubelet[1873]: E0213 10:07:53.314739 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:07:53.381613 kubelet[1873]: E0213 10:07:53.381507 1873 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:54.285594 env[1473]: time="2024-02-13T10:07:54.285465840Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:07:54.312706 env[1473]: time="2024-02-13T10:07:54.312605788Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:54.312899 kubelet[1873]: E0213 10:07:54.312808 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:07:54.312899 kubelet[1873]: E0213 10:07:54.312870 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:07:54.312899 kubelet[1873]: E0213 10:07:54.312889 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Feb 13 10:07:54.312986 kubelet[1873]: E0213 10:07:54.312904 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:07:54.382164 kubelet[1873]: E0213 10:07:54.382052 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:55.284781 env[1473]: time="2024-02-13T10:07:55.284691997Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:07:55.298699 env[1473]: time="2024-02-13T10:07:55.298623051Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:07:55.298909 kubelet[1873]: E0213 10:07:55.298760 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:07:55.298909 kubelet[1873]: E0213 10:07:55.298784 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:07:55.298909 kubelet[1873]: E0213 10:07:55.298808 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:07:55.298909 kubelet[1873]: E0213 10:07:55.298827 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:07:55.383214 kubelet[1873]: E0213 10:07:55.383098 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:56.383448 kubelet[1873]: E0213 10:07:56.383340 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:57.384534 kubelet[1873]: E0213 10:07:57.384431 1873 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:58.385769 kubelet[1873]: E0213 10:07:58.385648 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:07:59.386521 kubelet[1873]: E0213 10:07:59.386443 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:00.387802 kubelet[1873]: E0213 10:08:00.387679 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:01.388529 kubelet[1873]: E0213 10:08:01.388421 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:02.389227 kubelet[1873]: E0213 10:08:02.389111 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:03.390220 kubelet[1873]: E0213 10:08:03.390104 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:04.285412 env[1473]: time="2024-02-13T10:08:04.285260814Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:08:04.314775 env[1473]: time="2024-02-13T10:08:04.314739130Z" level=error msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:08:04.314915 kubelet[1873]: E0213 10:08:04.314902 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:08:04.314993 kubelet[1873]: E0213 10:08:04.314928 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:08:04.314993 kubelet[1873]: E0213 10:08:04.314949 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:08:04.314993 kubelet[1873]: E0213 10:08:04.314972 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:08:04.390614 kubelet[1873]: E0213 10:08:04.390495 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:05.391120 kubelet[1873]: E0213 10:08:05.391015 1873 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:06.285210 env[1473]: time="2024-02-13T10:08:06.285079906Z" level=info msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\"" Feb 13 10:08:06.311329 env[1473]: time="2024-02-13T10:08:06.311271576Z" level=error msg="StopPodSandbox for \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\" failed" error="failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:08:06.311462 kubelet[1873]: E0213 10:08:06.311442 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af" Feb 13 10:08:06.311494 kubelet[1873]: E0213 10:08:06.311466 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af} Feb 13 10:08:06.311494 kubelet[1873]: E0213 10:08:06.311487 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:08:06.311562 kubelet[1873]: E0213 10:08:06.311504 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da6a3b0d-4f2e-49b1-a2b6-346cad162ffb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ac1b3a6398052cc1245173a6abcd93e2989a0a4b2aa7c1d72dbf1ca0e666af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-lzm4g" podUID=da6a3b0d-4f2e-49b1-a2b6-346cad162ffb Feb 13 10:08:06.391895 kubelet[1873]: E0213 10:08:06.391830 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:07.284838 env[1473]: time="2024-02-13T10:08:07.284707878Z" level=info msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\"" Feb 13 10:08:07.299287 env[1473]: time="2024-02-13T10:08:07.299226040Z" level=error msg="StopPodSandbox for \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\" failed" error="failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:08:07.299506 kubelet[1873]: E0213 10:08:07.299365 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534" Feb 13 10:08:07.299506 kubelet[1873]: E0213 10:08:07.299397 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534} Feb 13 10:08:07.299506 kubelet[1873]: E0213 10:08:07.299419 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:08:07.299506 kubelet[1873]: E0213 10:08:07.299437 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d6d9af-5bd0-4d52-a244-b2ec483822b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a51424302062a2943720a6f0bc177ba27d92e875020e1a730a7f36c24386534\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-284zz" podUID=15d6d9af-5bd0-4d52-a244-b2ec483822b5 Feb 13 10:08:07.392730 kubelet[1873]: E0213 10:08:07.392666 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:08.393516 kubelet[1873]: E0213 10:08:08.393349 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:09.100861 kubelet[1873]: E0213 10:08:09.100753 1873 file.go:104] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:09.394557 kubelet[1873]: E0213 10:08:09.394360 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:10.395773 kubelet[1873]: E0213 10:08:10.395657 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:11.396145 kubelet[1873]: E0213 10:08:11.396081 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:12.397352 kubelet[1873]: E0213 10:08:12.397229 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:13.397859 kubelet[1873]: E0213 10:08:13.397750 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:14.398836 kubelet[1873]: E0213 10:08:14.398715 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 10:08:14.819878 update_engine[1465]: I0213 10:08:14.819767 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 10:08:14.819878 update_engine[1465]: I0213 10:08:14.819845 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 10:08:14.824013 update_engine[1465]: I0213 10:08:14.823936 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 10:08:14.824871 update_engine[1465]: I0213 10:08:14.824796 1465 omaha_request_params.cc:62] Current group set to lts Feb 13 10:08:14.825240 update_engine[1465]: I0213 10:08:14.825173 1465 update_attempter.cc:499] Already updated boot flags. Skipping. 
Feb 13 10:08:14.825240 update_engine[1465]: I0213 10:08:14.825193 1465 update_attempter.cc:643] Scheduling an action processor start. Feb 13 10:08:14.825240 update_engine[1465]: I0213 10:08:14.825225 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 10:08:14.825585 update_engine[1465]: I0213 10:08:14.825294 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 10:08:14.825585 update_engine[1465]: I0213 10:08:14.825454 1465 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 10:08:14.825585 update_engine[1465]: I0213 10:08:14.825474 1465 omaha_request_action.cc:271] Request: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: Feb 13 10:08:14.825585 update_engine[1465]: I0213 10:08:14.825484 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 10:08:14.826753 locksmithd[1510]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 10:08:14.828637 update_engine[1465]: I0213 10:08:14.828547 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 10:08:14.828848 update_engine[1465]: E0213 10:08:14.828775 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 10:08:14.828969 update_engine[1465]: I0213 10:08:14.828933 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 10:08:15.284716 env[1473]: time="2024-02-13T10:08:15.284587497Z" level=info msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\"" Feb 13 10:08:15.311957 env[1473]: time="2024-02-13T10:08:15.311894583Z" level=error 
msg="StopPodSandbox for \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\" failed" error="failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 10:08:15.312102 kubelet[1873]: E0213 10:08:15.312063 1873 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f" Feb 13 10:08:15.312102 kubelet[1873]: E0213 10:08:15.312087 1873 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f} Feb 13 10:08:15.312158 kubelet[1873]: E0213 10:08:15.312108 1873 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 10:08:15.312158 kubelet[1873]: E0213 10:08:15.312125 1873 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36932f06-7df4-41dc-9f83-abe5596dbe2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"0dbe1159ce2b0d578f4b56dc9fdf94124ae085b9489e3878d3b868228a78dc5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=36932f06-7df4-41dc-9f83-abe5596dbe2f Feb 13 10:08:15.399058 kubelet[1873]: E0213 10:08:15.398956 1873 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"