Feb 13 05:11:30.565788 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 13 05:11:30.565802 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 05:11:30.565808 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 05:11:30.565812 kernel: BIOS-provided physical RAM map:
Feb 13 05:11:30.565816 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 05:11:30.565820 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 05:11:30.565824 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 05:11:30.565829 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 05:11:30.565833 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 05:11:30.565837 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbbfff] usable
Feb 13 05:11:30.565840 kernel: BIOS-e820: [mem 0x000000006dfbc000-0x000000006dfbcfff] ACPI NVS
Feb 13 05:11:30.565844 kernel: BIOS-e820: [mem 0x000000006dfbd000-0x000000006dfbdfff] reserved
Feb 13 05:11:30.565848 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x0000000077fc4fff] usable
Feb 13 05:11:30.565852 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Feb 13 05:11:30.565858 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Feb 13 05:11:30.565862 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Feb 13 05:11:30.565866 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Feb 13 05:11:30.565870 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Feb 13 05:11:30.565874 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Feb 13 05:11:30.565879 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 05:11:30.565883 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 05:11:30.565887 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 05:11:30.565891 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 05:11:30.565896 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 05:11:30.565900 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Feb 13 05:11:30.565904 kernel: NX (Execute Disable) protection: active
Feb 13 05:11:30.565908 kernel: SMBIOS 3.2.1 present.
Feb 13 05:11:30.565913 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 13 05:11:30.565917 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 05:11:30.565921 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 05:11:30.565925 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 05:11:30.565930 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 05:11:30.565934 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Feb 13 05:11:30.565939 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 05:11:30.565944 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Feb 13 05:11:30.565948 kernel: Using GB pages for direct mapping
Feb 13 05:11:30.565953 kernel: ACPI: Early table checksum verification disabled
Feb 13 05:11:30.565957 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 05:11:30.565961 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 05:11:30.565966 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Feb 13 05:11:30.565972 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 05:11:30.565977 kernel: ACPI: FACS 0x0000000079662F80 000040
Feb 13 05:11:30.565982 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Feb 13 05:11:30.565987 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Feb 13 05:11:30.565991 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 05:11:30.565996 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 05:11:30.566001 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 05:11:30.566006 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 05:11:30.566011 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 05:11:30.566016 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 05:11:30.566021 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:11:30.566025 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 05:11:30.566030 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 05:11:30.566035 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:11:30.566039 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:11:30.566044 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 05:11:30.566050 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 05:11:30.566055 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:11:30.566059 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 05:11:30.566064 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 05:11:30.566069 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 13 05:11:30.566073 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 05:11:30.566078 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 05:11:30.566083 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 05:11:30.566088 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf5m 01072009 AMI 00010013)
Feb 13 05:11:30.566093 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 05:11:30.566098 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 05:11:30.566103 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 05:11:30.566107 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 05:11:30.566112 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 05:11:30.566117 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Feb 13 05:11:30.566122 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Feb 13 05:11:30.566126 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Feb 13 05:11:30.566132 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Feb 13 05:11:30.566136 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Feb 13 05:11:30.566141 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Feb 13 05:11:30.566146 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Feb 13 05:11:30.566150 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Feb 13 05:11:30.566155 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Feb 13 05:11:30.566159 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Feb 13 05:11:30.566164 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Feb 13 05:11:30.566169 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Feb 13 05:11:30.566174 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Feb 13 05:11:30.566179 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Feb 13 05:11:30.566183 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Feb 13 05:11:30.566188 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Feb 13 05:11:30.566193 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Feb 13 05:11:30.566197 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Feb 13 05:11:30.566202 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Feb 13 05:11:30.566207 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Feb 13 05:11:30.566211 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Feb 13 05:11:30.566217 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Feb 13 05:11:30.566222 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Feb 13 05:11:30.566226 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Feb 13 05:11:30.566231 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Feb 13 05:11:30.566235 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Feb 13 05:11:30.566240 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Feb 13 05:11:30.566245 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Feb 13 05:11:30.566249 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Feb 13 05:11:30.566254 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Feb 13 05:11:30.566259 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Feb 13 05:11:30.566264 kernel: No NUMA configuration found
Feb 13 05:11:30.566269 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Feb 13 05:11:30.566274 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Feb 13 05:11:30.566278 kernel: Zone ranges:
Feb 13 05:11:30.566283 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 05:11:30.566288 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 05:11:30.566292 kernel:   Normal   [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 05:11:30.566297 kernel: Movable zone start for each node
Feb 13 05:11:30.566302 kernel: Early memory node ranges
Feb 13 05:11:30.566307 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 05:11:30.566312 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 05:11:30.566316 kernel:   node   0: [mem 0x0000000040400000-0x000000006dfbbfff]
Feb 13 05:11:30.566321 kernel:   node   0: [mem 0x000000006dfbe000-0x0000000077fc4fff]
Feb 13 05:11:30.566326 kernel:   node   0: [mem 0x00000000790a8000-0x0000000079230fff]
Feb 13 05:11:30.566333 kernel:   node   0: [mem 0x000000007beff000-0x000000007befffff]
Feb 13 05:11:30.566339 kernel:   node   0: [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 05:11:30.566344 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Feb 13 05:11:30.566353 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 05:11:30.566358 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 05:11:30.566363 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 05:11:30.566369 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 05:11:30.566374 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 13 05:11:30.566379 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 13 05:11:30.566384 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Feb 13 05:11:30.566389 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Feb 13 05:11:30.566395 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 05:11:30.566400 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 05:11:30.566405 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 05:11:30.566410 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 05:11:30.566415 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 05:11:30.566420 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 05:11:30.566425 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 05:11:30.566430 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 05:11:30.566434 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 05:11:30.566439 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 05:11:30.566445 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 05:11:30.566450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 05:11:30.566455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 05:11:30.566460 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 05:11:30.566465 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 05:11:30.566470 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 05:11:30.566475 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 05:11:30.566480 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 05:11:30.566485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 05:11:30.566491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 05:11:30.566496 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 05:11:30.566501 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 05:11:30.566506 kernel: TSC deadline timer available
Feb 13 05:11:30.566511 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 05:11:30.566516 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Feb 13 05:11:30.566521 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 05:11:30.566526 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 05:11:30.566532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 05:11:30.566537 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 05:11:30.566542 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 05:11:30.566547 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 05:11:30.566551 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 8222327
Feb 13 05:11:30.566557 kernel: Policy zone: Normal
Feb 13 05:11:30.566562 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 05:11:30.566567 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 05:11:30.566573 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 05:11:30.566578 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 05:11:30.566583 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 05:11:30.566588 kernel: Memory: 32683728K/33411988K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 05:11:30.566594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 05:11:30.566599 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 05:11:30.566604 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 05:11:30.566609 kernel: rcu: Hierarchical RCU implementation.
Feb 13 05:11:30.566614 kernel: rcu: RCU event tracing is enabled.
Feb 13 05:11:30.566620 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 05:11:30.566625 kernel: Rude variant of Tasks RCU enabled.
Feb 13 05:11:30.566630 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 05:11:30.566635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 05:11:30.566640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 05:11:30.566645 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 05:11:30.566650 kernel: random: crng init done
Feb 13 05:11:30.566655 kernel: Console: colour dummy device 80x25
Feb 13 05:11:30.566660 kernel: printk: console [tty0] enabled
Feb 13 05:11:30.566666 kernel: printk: console [ttyS1] enabled
Feb 13 05:11:30.566671 kernel: ACPI: Core revision 20210730
Feb 13 05:11:30.566676 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 13 05:11:30.566681 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 05:11:30.566686 kernel: DMAR: Host address width 39
Feb 13 05:11:30.566691 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 13 05:11:30.566696 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 13 05:11:30.566701 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 05:11:30.566706 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 05:11:30.566712 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Feb 13 05:11:30.566717 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Feb 13 05:11:30.566722 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 13 05:11:30.566727 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 05:11:30.566732 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 05:11:30.566737 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 05:11:30.566742 kernel: x2apic enabled
Feb 13 05:11:30.566747 kernel: Switched APIC routing to cluster x2apic.
Feb 13 05:11:30.566752 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 05:11:30.566757 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 05:11:30.566763 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 05:11:30.566768 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 05:11:30.566773 kernel: process: using mwait in idle threads
Feb 13 05:11:30.566778 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 05:11:30.566783 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 05:11:30.566788 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 05:11:30.566793 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 05:11:30.566798 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 05:11:30.566804 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 05:11:30.566809 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 05:11:30.566814 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 05:11:30.566819 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 05:11:30.566824 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 05:11:30.566829 kernel: TAA: Mitigation: TSX disabled
Feb 13 05:11:30.566834 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 05:11:30.566839 kernel: SRBDS: Mitigation: Microcode
Feb 13 05:11:30.566844 kernel: GDS: Vulnerable: No microcode
Feb 13 05:11:30.566850 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 05:11:30.566855 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 05:11:30.566860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 05:11:30.566865 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 05:11:30.566870 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 05:11:30.566874 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 05:11:30.566879 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 05:11:30.566884 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 05:11:30.566889 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 05:11:30.566895 kernel: Freeing SMP alternatives memory: 32K
Feb 13 05:11:30.566900 kernel: pid_max: default: 32768 minimum: 301
Feb 13 05:11:30.566905 kernel: LSM: Security Framework initializing
Feb 13 05:11:30.566910 kernel: SELinux: Initializing.
Feb 13 05:11:30.566915 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 05:11:30.566920 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 05:11:30.566925 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 05:11:30.566930 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 05:11:30.566935 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 05:11:30.566941 kernel: ... version:                4
Feb 13 05:11:30.566946 kernel: ... bit width:              48
Feb 13 05:11:30.566951 kernel: ... generic registers:      4
Feb 13 05:11:30.566956 kernel: ... value mask:             0000ffffffffffff
Feb 13 05:11:30.566961 kernel: ... max period:             00007fffffffffff
Feb 13 05:11:30.566966 kernel: ... fixed-purpose events:   3
Feb 13 05:11:30.566971 kernel: ... event mask:             000000070000000f
Feb 13 05:11:30.566976 kernel: signal: max sigframe size: 2032
Feb 13 05:11:30.566981 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 05:11:30.566986 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 05:11:30.566991 kernel: smp: Bringing up secondary CPUs ...
Feb 13 05:11:30.566997 kernel: x86: Booting SMP configuration:
Feb 13 05:11:30.567002 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8
Feb 13 05:11:30.567007 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 05:11:30.567012 kernel:   #9 #10 #11 #12 #13 #14 #15
Feb 13 05:11:30.567017 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 05:11:30.567022 kernel: smpboot: Max logical packages: 1
Feb 13 05:11:30.567027 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 05:11:30.567032 kernel: devtmpfs: initialized
Feb 13 05:11:30.567037 kernel: x86/mm: Memory block size: 128MB
Feb 13 05:11:30.567042 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbc000-0x6dfbcfff] (4096 bytes)
Feb 13 05:11:30.567047 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Feb 13 05:11:30.567052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 05:11:30.567058 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 05:11:30.567063 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 05:11:30.567068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 05:11:30.567073 kernel: audit: initializing netlink subsys (disabled)
Feb 13 05:11:30.567078 kernel: audit: type=2000 audit(1707801085.120:1): state=initialized audit_enabled=0 res=1
Feb 13 05:11:30.567083 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 05:11:30.567088 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 05:11:30.567093 kernel: cpuidle: using governor menu
Feb 13 05:11:30.567098 kernel: ACPI: bus type PCI registered
Feb 13 05:11:30.567103 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 05:11:30.567109 kernel: dca service started, version 1.12.1
Feb 13 05:11:30.567113 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 05:11:30.567119 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 05:11:30.567124 kernel: PCI: Using configuration type 1 for base access
Feb 13 05:11:30.567129 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 05:11:30.567134 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 05:11:30.567139 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 05:11:30.567144 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 05:11:30.567149 kernel: ACPI: Added _OSI(Module Device)
Feb 13 05:11:30.567154 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 05:11:30.567159 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 05:11:30.567165 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 05:11:30.567170 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 05:11:30.567175 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 05:11:30.567180 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 05:11:30.567184 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 05:11:30.567189 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567194 kernel: ACPI: SSDT 0xFFFF8E2040214300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 05:11:30.567199 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 05:11:30.567204 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567209 kernel: ACPI: SSDT 0xFFFF8E2041CEAC00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 05:11:30.567215 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567220 kernel: ACPI: SSDT 0xFFFF8E2041C5D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 05:11:30.567225 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567230 kernel: ACPI: SSDT 0xFFFF8E2041C5C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 05:11:30.567235 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567240 kernel: ACPI: SSDT 0xFFFF8E204014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 05:11:30.567245 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 05:11:30.567250 kernel: ACPI: SSDT 0xFFFF8E2041CE9400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 05:11:30.567255 kernel: ACPI: Interpreter enabled
Feb 13 05:11:30.567260 kernel: ACPI: PM: (supports S0 S5)
Feb 13 05:11:30.567265 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 05:11:30.567270 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 05:11:30.567275 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 05:11:30.567280 kernel: HEST: Table parsing has been initialized.
Feb 13 05:11:30.567285 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 05:11:30.567290 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 05:11:30.567295 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 05:11:30.567300 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 05:11:30.567306 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 05:11:30.567311 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 05:11:30.567316 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 05:11:30.567321 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 05:11:30.567326 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 13 05:11:30.567333 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 05:11:30.567339 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 05:11:30.567344 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 05:11:30.567348 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 05:11:30.567354 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 05:11:30.567359 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 05:11:30.567364 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 05:11:30.567429 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 05:11:30.567475 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 05:11:30.567515 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 05:11:30.567522 kernel: PCI host bridge to bus 0000:00
Feb 13 05:11:30.567565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 05:11:30.567605 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 05:11:30.567641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 05:11:30.567679 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Feb 13 05:11:30.567715 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 05:11:30.567751 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 05:11:30.567801 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 05:11:30.567854 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 05:11:30.567898 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.567944 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 13 05:11:30.567987 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.568032 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 13 05:11:30.568076 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Feb 13 05:11:30.568121 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 13 05:11:30.568163 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 13 05:11:30.568208 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 05:11:30.568250 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Feb 13 05:11:30.568296 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 05:11:30.568343 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Feb 13 05:11:30.568400 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 05:11:30.568453 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Feb 13 05:11:30.568505 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 05:11:30.568564 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 05:11:30.568617 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Feb 13 05:11:30.568667 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Feb 13 05:11:30.568721 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 05:11:30.568775 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:11:30.568829 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 05:11:30.568880 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:11:30.568934 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 05:11:30.568986 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Feb 13 05:11:30.569045 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 05:11:30.569102 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 05:11:30.569154 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Feb 13 05:11:30.569204 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 05:11:30.569259 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 05:11:30.569311 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Feb 13 05:11:30.569366 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 05:11:30.569421 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 05:11:30.569476 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Feb 13 05:11:30.569528 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Feb 13 05:11:30.569577 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 13 05:11:30.569619 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 13 05:11:30.569660 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 13 05:11:30.569703 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Feb 13 05:11:30.569743 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 05:11:30.569795 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 05:11:30.569837 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.569883 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 05:11:30.569928 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.569973 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 05:11:30.570016 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.570061 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 05:11:30.570103 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.570149 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 13 05:11:30.570191 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.570238 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 05:11:30.570280 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 05:11:30.570328 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 05:11:30.570384 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 05:11:30.570427 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Feb 13 05:11:30.570468 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 05:11:30.570515 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 05:11:30.570556 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 05:11:30.570598 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 05:11:30.570646 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 05:11:30.570690 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 05:11:30.570735 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Feb 13 05:11:30.570777 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 13 05:11:30.570823 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 05:11:30.570867 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 05:11:30.570915 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 05:11:30.570958 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 05:11:30.571004 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Feb 13 05:11:30.571046 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 13 05:11:30.571090 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 05:11:30.571135 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 05:11:30.571179 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 13 05:11:30.571220 kernel: pci 0000:00:01.1:   bridge window [mem 0x96100000-0x962fffff]
Feb 13 05:11:30.571262 kernel: pci 0000:00:01.1:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 05:11:30.571304 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 13 05:11:30.571355 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 05:11:30.571400 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Feb 13 05:11:30.571446 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 05:11:30.571489 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Feb 13 05:11:30.571532 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.571574 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 13 05:11:30.571616 kernel: pci 0000:00:1b.4:   bridge window [io 0x5000-0x5fff]
Feb 13 05:11:30.571657 kernel: pci 0000:00:1b.4:   bridge window [mem 0x96400000-0x964fffff]
Feb 13 05:11:30.571708 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Feb 13 05:11:30.571752 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Feb 13 05:11:30.571798 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 05:11:30.571841 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Feb 13 05:11:30.571885 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Feb 13 05:11:30.571926 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 13 05:11:30.571969 kernel: pci 0000:00:1b.5:   bridge window [io 0x4000-0x4fff]
Feb 13 05:11:30.572010 kernel: pci 0000:00:1b.5:   bridge window [mem 0x96300000-0x963fffff]
Feb 13 05:11:30.572052 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 13 05:11:30.572098 kernel: pci 0000:07:00.0: [1a03:1150]
type 01 class 0x060400 Feb 13 05:11:30.572176 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 13 05:11:30.572239 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 05:11:30.572282 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 05:11:30.572325 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 05:11:30.572374 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 05:11:30.572417 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 05:11:30.572464 kernel: pci_bus 0000:08: extended config space not accessible Feb 13 05:11:30.572518 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 05:11:30.572564 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Feb 13 05:11:30.572609 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Feb 13 05:11:30.572654 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 05:11:30.572700 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 05:11:30.572744 kernel: pci 0000:08:00.0: supports D1 D2 Feb 13 05:11:30.572790 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 05:11:30.572836 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 05:11:30.572880 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 05:11:30.572923 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 05:11:30.572931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 05:11:30.572937 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 05:11:30.572942 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 05:11:30.572947 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 05:11:30.572952 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 05:11:30.572959 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 05:11:30.572964 kernel: ACPI: PCI: Interrupt link LNKG 
configured for IRQ 0 Feb 13 05:11:30.572970 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 05:11:30.572975 kernel: iommu: Default domain type: Translated Feb 13 05:11:30.572980 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 05:11:30.573026 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 13 05:11:30.573070 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 05:11:30.573116 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 13 05:11:30.573124 kernel: vgaarb: loaded Feb 13 05:11:30.573131 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 05:11:30.573136 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 05:11:30.573142 kernel: PTP clock support registered Feb 13 05:11:30.573147 kernel: PCI: Using ACPI for IRQ routing Feb 13 05:11:30.573152 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 05:11:30.573157 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 05:11:30.573163 kernel: e820: reserve RAM buffer [mem 0x6dfbc000-0x6fffffff] Feb 13 05:11:30.573168 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Feb 13 05:11:30.573173 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Feb 13 05:11:30.573179 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Feb 13 05:11:30.573184 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Feb 13 05:11:30.573190 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 05:11:30.573195 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 13 05:11:30.573200 kernel: clocksource: Switched to clocksource tsc-early Feb 13 05:11:30.573206 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 05:11:30.573211 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 05:11:30.573216 kernel: pnp: PnP ACPI init Feb 13 05:11:30.573259 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 
13 05:11:30.573305 kernel: pnp 00:02: [dma 0 disabled] Feb 13 05:11:30.573353 kernel: pnp 00:03: [dma 0 disabled] Feb 13 05:11:30.573404 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 05:11:30.573452 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 05:11:30.573502 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 05:11:30.573552 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 05:11:30.573602 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 05:11:30.573649 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 05:11:30.573697 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 05:11:30.573742 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 05:11:30.573790 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 05:11:30.573836 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 05:11:30.573884 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 05:11:30.573935 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 05:11:30.573983 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 05:11:30.574029 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 05:11:30.574075 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 05:11:30.574121 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 05:11:30.574167 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 05:11:30.574214 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 05:11:30.574266 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 05:11:30.574276 kernel: pnp: PnP ACPI: found 10 devices Feb 13 05:11:30.574283 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 
13 05:11:30.574290 kernel: NET: Registered PF_INET protocol family Feb 13 05:11:30.574296 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 05:11:30.574303 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 05:11:30.574310 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 05:11:30.574317 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 05:11:30.574325 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 05:11:30.574334 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 05:11:30.574341 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 05:11:30.574347 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 05:11:30.574354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 05:11:30.574360 kernel: NET: Registered PF_XDP protocol family Feb 13 05:11:30.574414 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Feb 13 05:11:30.574467 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Feb 13 05:11:30.574521 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Feb 13 05:11:30.574573 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 05:11:30.574628 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 05:11:30.574683 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 05:11:30.574739 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 05:11:30.574796 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 05:11:30.574848 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 05:11:30.574901 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 13 
05:11:30.574954 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 05:11:30.575007 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 05:11:30.575059 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 05:11:30.575112 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 05:11:30.575164 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 13 05:11:30.575218 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 05:11:30.575271 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 05:11:30.575322 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 13 05:11:30.575380 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 05:11:30.575434 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 05:11:30.575488 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 05:11:30.575541 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 13 05:11:30.575594 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 05:11:30.575645 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 05:11:30.575699 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 13 05:11:30.575748 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 05:11:30.575794 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 05:11:30.575841 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 05:11:30.575886 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 05:11:30.575932 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Feb 13 05:11:30.575978 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 05:11:30.576032 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Feb 13 05:11:30.576083 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 
05:11:30.576136 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 13 05:11:30.576185 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Feb 13 05:11:30.576238 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 05:11:30.576287 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Feb 13 05:11:30.576343 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 05:11:30.576394 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Feb 13 05:11:30.576445 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 13 05:11:30.576495 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Feb 13 05:11:30.576505 kernel: PCI: CLS 64 bytes, default 64 Feb 13 05:11:30.576512 kernel: DMAR: No ATSR found Feb 13 05:11:30.576519 kernel: DMAR: No SATC found Feb 13 05:11:30.576525 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 13 05:11:30.576532 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 13 05:11:30.576540 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 13 05:11:30.576547 kernel: DMAR: IOMMU feature pasid inconsistent Feb 13 05:11:30.576553 kernel: DMAR: IOMMU feature eafs inconsistent Feb 13 05:11:30.576560 kernel: DMAR: IOMMU feature prs inconsistent Feb 13 05:11:30.576566 kernel: DMAR: IOMMU feature nest inconsistent Feb 13 05:11:30.576573 kernel: DMAR: IOMMU feature mts inconsistent Feb 13 05:11:30.576579 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 13 05:11:30.576586 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 13 05:11:30.576593 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 05:11:30.576600 kernel: DMAR: dmar1: Using Queued invalidation Feb 13 05:11:30.576653 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 05:11:30.576705 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 05:11:30.576758 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 13 05:11:30.576810 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 13 05:11:30.576862 
kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 13 05:11:30.576913 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 13 05:11:30.576964 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 13 05:11:30.577018 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 13 05:11:30.577070 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 13 05:11:30.577121 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 13 05:11:30.577172 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 13 05:11:30.577224 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 13 05:11:30.577275 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 13 05:11:30.577326 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 13 05:11:30.577385 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 13 05:11:30.577439 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 13 05:11:30.577492 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 13 05:11:30.577541 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 13 05:11:30.577583 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 13 05:11:30.577624 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 13 05:11:30.577666 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 13 05:11:30.577708 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 13 05:11:30.577749 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 13 05:11:30.577795 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 13 05:11:30.577838 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 13 05:11:30.577882 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 05:11:30.577925 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 13 05:11:30.577969 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 13 05:11:30.578015 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 13 05:11:30.578023 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 05:11:30.578028 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 
05:11:30.578035 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Feb 13 05:11:30.578040 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 13 05:11:30.578046 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 05:11:30.578051 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 05:11:30.578056 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 05:11:30.578062 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 13 05:11:30.578109 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 05:11:30.578118 kernel: Initialise system trusted keyrings Feb 13 05:11:30.578124 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 05:11:30.578130 kernel: Key type asymmetric registered Feb 13 05:11:30.578135 kernel: Asymmetric key parser 'x509' registered Feb 13 05:11:30.578140 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 05:11:30.578146 kernel: io scheduler mq-deadline registered Feb 13 05:11:30.578151 kernel: io scheduler kyber registered Feb 13 05:11:30.578156 kernel: io scheduler bfq registered Feb 13 05:11:30.578198 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 13 05:11:30.578241 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 13 05:11:30.578285 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 13 05:11:30.578327 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 13 05:11:30.578374 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 13 05:11:30.578417 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 13 05:11:30.578459 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 13 05:11:30.578505 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 05:11:30.578513 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 05:11:30.578520 kernel: ERST: Error Record Serialization Table 
(ERST) support is initialized. Feb 13 05:11:30.578525 kernel: pstore: Registered erst as persistent store backend Feb 13 05:11:30.578531 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 05:11:30.578536 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 05:11:30.578541 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 05:11:30.578547 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 05:11:30.578589 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 05:11:30.578598 kernel: i8042: PNP: No PS/2 controller found. Feb 13 05:11:30.578637 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 05:11:30.578676 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 05:11:30.578714 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T05:11:29 UTC (1707801089) Feb 13 05:11:30.578751 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 05:11:30.578758 kernel: fail to initialize ptp_kvm Feb 13 05:11:30.578764 kernel: intel_pstate: Intel P-state driver initializing Feb 13 05:11:30.578769 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 05:11:30.578775 kernel: intel_pstate: HWP enabled Feb 13 05:11:30.578781 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 05:11:30.578787 kernel: vesafb: scrolling: redraw Feb 13 05:11:30.578792 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 05:11:30.578797 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x0000000012ceb16c, using 768k, total 768k Feb 13 05:11:30.578803 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 05:11:30.578808 kernel: fb0: VESA VGA frame buffer device Feb 13 05:11:30.578813 kernel: NET: Registered PF_INET6 protocol family Feb 13 05:11:30.578819 kernel: Segment Routing with IPv6 Feb 13 05:11:30.578824 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 05:11:30.578829 kernel: NET: Registered 
PF_PACKET protocol family Feb 13 05:11:30.578835 kernel: Key type dns_resolver registered Feb 13 05:11:30.578840 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 05:11:30.578846 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 05:11:30.578851 kernel: IPI shorthand broadcast: enabled Feb 13 05:11:30.578856 kernel: sched_clock: Marking stable (1847537671, 1355320589)->(4627192275, -1424334015) Feb 13 05:11:30.578862 kernel: registered taskstats version 1 Feb 13 05:11:30.578867 kernel: Loading compiled-in X.509 certificates Feb 13 05:11:30.578872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 05:11:30.578877 kernel: Key type .fscrypt registered Feb 13 05:11:30.578884 kernel: Key type fscrypt-provisioning registered Feb 13 05:11:30.578889 kernel: pstore: Using crash dump compression: deflate Feb 13 05:11:30.578894 kernel: ima: Allocated hash algorithm: sha1 Feb 13 05:11:30.578899 kernel: ima: No architecture policies found Feb 13 05:11:30.578905 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 05:11:30.578910 kernel: Write protecting the kernel read-only data: 28672k Feb 13 05:11:30.578915 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 05:11:30.578921 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 05:11:30.578927 kernel: Run /init as init process Feb 13 05:11:30.578932 kernel: with arguments: Feb 13 05:11:30.578938 kernel: /init Feb 13 05:11:30.578943 kernel: with environment: Feb 13 05:11:30.578948 kernel: HOME=/ Feb 13 05:11:30.578953 kernel: TERM=linux Feb 13 05:11:30.578959 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 05:11:30.578965 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 
+XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 05:11:30.578973 systemd[1]: Detected architecture x86-64. Feb 13 05:11:30.578979 systemd[1]: Running in initrd. Feb 13 05:11:30.578984 systemd[1]: No hostname configured, using default hostname. Feb 13 05:11:30.578989 systemd[1]: Hostname set to . Feb 13 05:11:30.578994 systemd[1]: Initializing machine ID from random generator. Feb 13 05:11:30.579000 systemd[1]: Queued start job for default target initrd.target. Feb 13 05:11:30.579006 systemd[1]: Started systemd-ask-password-console.path. Feb 13 05:11:30.579011 systemd[1]: Reached target cryptsetup.target. Feb 13 05:11:30.579018 systemd[1]: Reached target ignition-diskful-subsequent.target. Feb 13 05:11:30.579023 systemd[1]: Reached target paths.target. Feb 13 05:11:30.579028 systemd[1]: Reached target slices.target. Feb 13 05:11:30.579034 systemd[1]: Reached target swap.target. Feb 13 05:11:30.579039 systemd[1]: Reached target timers.target. Feb 13 05:11:30.579045 systemd[1]: Listening on iscsid.socket. Feb 13 05:11:30.579050 systemd[1]: Listening on iscsiuio.socket. Feb 13 05:11:30.579056 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 05:11:30.579062 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 05:11:30.579068 systemd[1]: Listening on systemd-journald.socket. Feb 13 05:11:30.579073 kernel: tsc: Refined TSC clocksource calibration: 3408.000 MHz Feb 13 05:11:30.579079 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 05:11:30.579084 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 05:11:30.579090 kernel: clocksource: Switched to clocksource tsc Feb 13 05:11:30.579095 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 05:11:30.579101 systemd[1]: Reached target sockets.target. Feb 13 05:11:30.579107 systemd[1]: Starting iscsiuio.service... 
Feb 13 05:11:30.579113 systemd[1]: Starting kmod-static-nodes.service... Feb 13 05:11:30.579118 kernel: SCSI subsystem initialized Feb 13 05:11:30.579123 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 05:11:30.579129 kernel: Loading iSCSI transport class v2.0-870. Feb 13 05:11:30.579134 systemd[1]: Starting systemd-journald.service... Feb 13 05:11:30.579140 systemd[1]: Starting systemd-modules-load.service... Feb 13 05:11:30.579147 systemd-journald[268]: Journal started Feb 13 05:11:30.579175 systemd-journald[268]: Runtime Journal (/run/log/journal/1d39464239f0421eb295a1a13a18cd5f) is 8.0M, max 639.3M, 631.3M free. Feb 13 05:11:30.580868 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 05:11:30.605342 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 05:11:30.638335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 05:11:30.638350 systemd[1]: Started iscsiuio.service. Feb 13 05:11:30.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.664393 kernel: Bridge firewalling registered Feb 13 05:11:30.664423 kernel: audit: type=1130 audit(1707801090.662:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.664441 systemd[1]: Started systemd-journald.service. Feb 13 05:11:30.723576 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 05:11:30.840392 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 05:11:30.840422 kernel: audit: type=1130 audit(1707801090.740:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.840442 kernel: device-mapper: uevent: version 1.0.3 Feb 13 05:11:30.840469 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 05:11:30.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.742468 systemd[1]: Finished kmod-static-nodes.service. Feb 13 05:11:30.887661 kernel: audit: type=1130 audit(1707801090.842:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.844328 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 05:11:30.939857 kernel: audit: type=1130 audit(1707801090.895:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.888208 systemd-modules-load[269]: Inserted module 'dm_multipath' Feb 13 05:11:30.993382 kernel: audit: type=1130 audit(1707801090.947:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 13 05:11:30.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.896630 systemd[1]: Finished systemd-modules-load.service. Feb 13 05:11:31.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:30.968956 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 05:11:31.001900 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 05:11:31.048130 systemd[1]: Starting systemd-sysctl.service... Feb 13 05:11:31.048411 kernel: audit: type=1130 audit(1707801091.000:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.048450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 05:11:31.051186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 05:11:31.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.051793 systemd[1]: Finished systemd-sysctl.service. Feb 13 05:11:31.100521 kernel: audit: type=1130 audit(1707801091.049:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:31.113664 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 05:11:31.219328 kernel: audit: type=1130 audit(1707801091.112:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.219346 kernel: audit: type=1130 audit(1707801091.168:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.169923 systemd[1]: Starting dracut-cmdline.service... Feb 13 05:11:31.249443 kernel: iscsi: registered transport (tcp) Feb 13 05:11:31.249454 dracut-cmdline[292]: dracut-dracut-053 Feb 13 05:11:31.249454 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 05:11:31.249454 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 05:11:31.322589 kernel: iscsi: registered transport (qla4xxx) Feb 13 05:11:31.322602 kernel: QLogic iSCSI HBA Driver Feb 13 05:11:31.310310 systemd[1]: Finished dracut-cmdline.service. Feb 13 05:11:31.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.349285 systemd[1]: Starting dracut-pre-udev.service... 
Feb 13 05:11:31.362927 systemd[1]: Starting iscsid.service... Feb 13 05:11:31.376635 systemd[1]: Started iscsid.service. Feb 13 05:11:31.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.400799 iscsid[446]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 05:11:31.400799 iscsid[446]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 05:11:31.400799 iscsid[446]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 05:11:31.400799 iscsid[446]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 05:11:31.400799 iscsid[446]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 13 05:11:31.400799 iscsid[446]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 05:11:31.400799 iscsid[446]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 05:11:31.554455 kernel: raid6: avx2x4 gen() 26849 MB/s Feb 13 05:11:31.554468 kernel: raid6: avx2x4 xor() 21304 MB/s Feb 13 05:11:31.554475 kernel: raid6: avx2x2 gen() 53559 MB/s Feb 13 05:11:31.554482 kernel: raid6: avx2x2 xor() 32103 MB/s Feb 13 05:11:31.554490 kernel: raid6: avx2x1 gen() 45077 MB/s Feb 13 05:11:31.597412 kernel: raid6: avx2x1 xor() 27912 MB/s Feb 13 05:11:31.632409 kernel: raid6: sse2x4 gen() 21295 MB/s Feb 13 05:11:31.667413 kernel: raid6: sse2x4 xor() 11984 MB/s Feb 13 05:11:31.702366 kernel: raid6: sse2x2 gen() 21665 MB/s Feb 13 05:11:31.737364 kernel: raid6: sse2x2 xor() 13467 MB/s Feb 13 05:11:31.770401 kernel: raid6: sse2x1 gen() 18300 MB/s Feb 13 05:11:31.823176 kernel: raid6: sse2x1 xor() 8948 MB/s Feb 13 05:11:31.823191 kernel: raid6: using algorithm avx2x2 gen() 53559 MB/s Feb 13 05:11:31.823198 kernel: raid6: .... xor() 32103 MB/s, rmw enabled Feb 13 05:11:31.841646 kernel: raid6: using avx2x2 recovery algorithm Feb 13 05:11:31.888384 kernel: xor: automatically using best checksumming function avx Feb 13 05:11:31.967372 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 05:11:31.971856 systemd[1]: Finished dracut-pre-udev.service. Feb 13 05:11:31.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.980000 audit: BPF prog-id=6 op=LOAD Feb 13 05:11:31.980000 audit: BPF prog-id=7 op=LOAD Feb 13 05:11:31.982323 systemd[1]: Starting systemd-udevd.service... Feb 13 05:11:31.989819 systemd-udevd[471]: Using default interface naming scheme 'v252'. 
Feb 13 05:11:32.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:31.997632 systemd[1]: Started systemd-udevd.service. Feb 13 05:11:32.037454 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Feb 13 05:11:32.013982 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 05:11:32.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:32.044195 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 05:11:32.055557 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 05:11:32.105665 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 05:11:32.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:32.124867 systemd[1]: Starting dracut-initqueue.service... Feb 13 05:11:32.142355 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 05:11:32.142371 kernel: libata version 3.00 loaded. Feb 13 05:11:32.179516 kernel: ACPI: bus type USB registered Feb 13 05:11:32.179546 kernel: usbcore: registered new interface driver usbfs Feb 13 05:11:32.179554 kernel: usbcore: registered new interface driver hub Feb 13 05:11:32.179562 kernel: usbcore: registered new device driver usb Feb 13 05:11:32.256958 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 05:11:32.257000 kernel: AES CTR mode by8 optimization enabled Feb 13 05:11:32.257010 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 05:11:32.295069 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 05:11:32.295087 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 13 05:11:32.295157 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 05:11:32.295167 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 05:11:32.352384 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 05:11:32.352468 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 13 05:11:32.352531 kernel: pps pps0: new PPS source ptp0 Feb 13 05:11:32.352583 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 13 05:11:32.352637 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 05:11:32.352686 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2c Feb 13 05:11:32.352735 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 13 05:11:32.352783 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 05:11:32.356337 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 05:11:32.356409 kernel: scsi host0: ahci Feb 13 05:11:32.356479 kernel: scsi host1: ahci Feb 13 05:11:32.356537 kernel: scsi host2: ahci Feb 13 05:11:32.356591 kernel: scsi host3: ahci Feb 13 05:11:32.356655 kernel: scsi host4: ahci Feb 13 05:11:32.356706 kernel: scsi host5: ahci Feb 13 05:11:32.356756 kernel: scsi host6: ahci Feb 13 05:11:32.356808 kernel: scsi host7: ahci Feb 13 05:11:32.356857 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Feb 13 05:11:32.356864 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Feb 13 05:11:32.356871 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Feb 13 05:11:32.356877 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Feb 13 05:11:32.356884 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Feb 13 05:11:32.356891 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Feb 13 05:11:32.356898 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Feb 13 05:11:32.356904 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Feb 13 05:11:32.386365 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 05:11:32.392380 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 05:11:32.392448 kernel: pps pps1: new PPS source ptp1 Feb 13 05:11:32.392511 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 13 05:11:32.392566 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 05:11:32.392618 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2d Feb 13 05:11:32.392669 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 13 05:11:32.392717 kernel: igb 0000:05:00.0: 
Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 05:11:32.664356 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 05:11:32.664447 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.664456 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 05:11:32.665374 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 05:11:32.665391 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.665401 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.665408 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.665415 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 05:11:32.665421 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 05:11:32.665428 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.665434 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 13 05:11:32.666404 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 05:11:32.669404 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 05:11:32.669419 kernel: ata1.00: Features: NCQ-prio Feb 13 05:11:32.669428 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 05:11:32.669436 kernel: ata2.00: Features: NCQ-prio Feb 13 05:11:32.674337 kernel: ata1.00: configured for UDMA/133 Feb 13 05:11:32.674352 kernel: ata2.00: configured for UDMA/133 Feb 13 05:11:32.674360 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 05:11:32.674431 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 05:11:32.721011 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 13 05:11:32.721082 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 05:11:33.055784 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 05:11:33.055808 kernel: xhci_hcd 
0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 05:11:33.055899 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:11:33.055908 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 05:11:33.056008 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 05:11:33.056107 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 05:11:33.056183 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 05:11:33.056265 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 05:11:33.056329 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 05:11:33.056389 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:11:33.056397 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 05:11:33.056403 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 05:11:33.080683 kernel: hub 1-0:1.0: USB hub found Feb 13 05:11:33.080780 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 05:11:33.080837 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 05:11:33.096382 kernel: hub 1-0:1.0: 16 ports detected Feb 13 05:11:33.096454 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 13 05:11:33.119810 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 05:11:33.146408 kernel: hub 2-0:1.0: USB hub found Feb 13 05:11:33.146483 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 05:11:33.146542 kernel: hub 2-0:1.0: 10 ports detected Feb 13 05:11:33.186905 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 05:11:33.186920 kernel: usb: port power management may be unreliable Feb 13 05:11:33.191385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 05:11:33.191399 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 05:11:33.391402 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 05:11:33.391428 kernel: ata1.00: Enabling 
discard_zeroes_data Feb 13 05:11:33.403898 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 05:11:33.438374 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 05:11:33.441832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 05:11:33.509450 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by (udev-worker) (522) Feb 13 05:11:33.509463 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 13 05:11:33.509536 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 05:11:33.510931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 05:11:33.556418 kernel: hub 1-14:1.0: USB hub found Feb 13 05:11:33.556493 kernel: hub 1-14:1.0: 4 ports detected Feb 13 05:11:33.523172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 05:11:33.548578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 05:11:33.564436 systemd[1]: Reached target initrd-root-device.target. Feb 13 05:11:33.583820 systemd[1]: Starting disk-uuid.service... Feb 13 05:11:33.603527 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 05:11:33.719437 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 13 05:11:33.719449 kernel: audit: type=1130 audit(1707801093.616:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:33.719458 kernel: audit: type=1131 audit(1707801093.616:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:33.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:33.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:33.603568 systemd[1]: Finished disk-uuid.service. Feb 13 05:11:33.617521 systemd[1]: Reached target local-fs-pre.target. Feb 13 05:11:33.727420 systemd[1]: Reached target local-fs.target. Feb 13 05:11:33.727454 systemd[1]: Reached target sysinit.target. Feb 13 05:11:33.739460 systemd[1]: Reached target basic.target. Feb 13 05:11:33.772816 systemd[1]: Starting verity-setup.service... Feb 13 05:11:33.849419 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 05:11:33.849433 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 05:11:33.849510 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 13 05:11:33.855713 systemd[1]: Found device dev-mapper-usr.device. Feb 13 05:11:33.902437 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 05:11:33.902461 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 05:11:33.889202 systemd[1]: Mounting sysusr-usr.mount... Feb 13 05:11:33.909474 systemd[1]: Finished verity-setup.service. Feb 13 05:11:33.983423 kernel: audit: type=1130 audit(1707801093.924:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:33.983438 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 05:11:33.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:33.995883 systemd[1]: Mounted sysusr-usr.mount. 
Feb 13 05:11:34.029458 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 05:11:34.029469 kernel: usbcore: registered new interface driver usbhid Feb 13 05:11:34.053413 kernel: usbhid: USB HID core driver Feb 13 05:11:34.091497 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 05:11:34.121514 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 05:11:34.153398 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 13 05:11:34.153498 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 05:11:34.230127 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 05:11:34.270995 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 05:11:34.271136 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 13 05:11:34.307758 systemd[1]: Finished dracut-initqueue.service. Feb 13 05:11:34.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.316644 systemd[1]: Reached target remote-fs-pre.target. Feb 13 05:11:34.389552 kernel: audit: type=1130 audit(1707801094.315:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.375544 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 05:11:34.375577 systemd[1]: Reached target remote-fs.target. Feb 13 05:11:34.390017 systemd[1]: Starting dracut-pre-mount.service... Feb 13 05:11:34.415606 systemd[1]: Finished dracut-pre-mount.service. 
Feb 13 05:11:34.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.433119 systemd[1]: Starting systemd-fsck-root.service... Feb 13 05:11:34.485391 kernel: audit: type=1130 audit(1707801094.431:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.494064 systemd-fsck[728]: ROOT: clean, 631/553520 files, 110552/553472 blocks Feb 13 05:11:34.507889 systemd[1]: Finished systemd-fsck-root.service. Feb 13 05:11:34.598959 kernel: audit: type=1130 audit(1707801094.515:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.599047 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 05:11:34.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.518041 systemd[1]: Mounting sysroot.mount... Feb 13 05:11:34.606947 systemd[1]: Mounted sysroot.mount. Feb 13 05:11:34.622595 systemd[1]: Reached target initrd-root-fs.target. Feb 13 05:11:34.630227 systemd[1]: Mounting sysroot-usr.mount... Feb 13 05:11:34.655195 systemd[1]: Mounted sysroot-usr.mount. Feb 13 05:11:34.665154 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 13 05:11:34.768454 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 05:11:34.768469 kernel: BTRFS info (device sda6): using free space tree Feb 13 05:11:34.768476 kernel: BTRFS info (device sda6): has skinny extents Feb 13 05:11:34.768483 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 05:11:34.684482 systemd[1]: Starting initrd-setup-root.service... Feb 13 05:11:34.777641 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 05:11:34.795634 systemd[1]: Finished initrd-setup-root.service. Feb 13 05:11:34.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.813414 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 13 05:11:34.884580 kernel: audit: type=1130 audit(1707801094.811:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.874640 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 05:11:34.958598 kernel: audit: type=1130 audit(1707801094.893:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.958646 initrd-setup-root-after-ignition[811]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 05:11:34.894719 systemd[1]: Reached target ignition-subsequent.target. Feb 13 05:11:34.968034 systemd[1]: Starting initrd-parse-etc.service... 
Feb 13 05:11:35.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.994692 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 05:11:35.081603 kernel: audit: type=1130 audit(1707801095.003:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:34.994746 systemd[1]: Finished initrd-parse-etc.service. Feb 13 05:11:35.004821 systemd[1]: Reached target initrd-fs.target. Feb 13 05:11:35.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.067565 systemd[1]: Reached target initrd.target. Feb 13 05:11:35.067624 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 05:11:35.067979 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 05:11:35.088680 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 05:11:35.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.104904 systemd[1]: Starting initrd-cleanup.service... Feb 13 05:11:35.121797 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 05:11:35.133618 systemd[1]: Stopped target timers.target. Feb 13 05:11:35.151708 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 13 05:11:35.151910 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 05:11:35.168175 systemd[1]: Stopped target initrd.target. Feb 13 05:11:35.181800 systemd[1]: Stopped target basic.target. Feb 13 05:11:35.196018 systemd[1]: Stopped target ignition-subsequent.target. Feb 13 05:11:35.212889 systemd[1]: Stopped target ignition-diskful-subsequent.target. Feb 13 05:11:35.229883 systemd[1]: Stopped target initrd-root-device.target. Feb 13 05:11:35.247008 systemd[1]: Stopped target paths.target. Feb 13 05:11:35.261005 systemd[1]: Stopped target remote-fs.target. Feb 13 05:11:35.275883 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 05:11:35.291003 systemd[1]: Stopped target slices.target. Feb 13 05:11:35.306997 systemd[1]: Stopped target sockets.target. Feb 13 05:11:35.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.324007 systemd[1]: Stopped target sysinit.target. Feb 13 05:11:35.340897 systemd[1]: Stopped target local-fs.target. Feb 13 05:11:35.355885 systemd[1]: Stopped target local-fs-pre.target. Feb 13 05:11:35.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.370851 systemd[1]: Stopped target swap.target. Feb 13 05:11:35.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.386937 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 05:11:35.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:35.486622 iscsid[446]: iscsid shutting down. Feb 13 05:11:35.387269 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 05:11:35.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.402100 systemd[1]: Stopped target cryptsetup.target. Feb 13 05:11:35.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.416777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 05:11:35.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.420581 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 05:11:35.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.431746 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 05:11:35.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.432075 systemd[1]: Stopped dracut-initqueue.service. Feb 13 05:11:35.447021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 05:11:35.447371 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 05:11:35.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 05:11:35.463985 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 05:11:35.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.464295 systemd[1]: Stopped initrd-setup-root.service. Feb 13 05:11:35.479322 systemd[1]: Stopping iscsid.service... Feb 13 05:11:35.494551 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 05:11:35.494626 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 05:11:35.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.510595 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 05:11:35.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.510674 systemd[1]: Stopped systemd-sysctl.service. Feb 13 05:11:35.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.525717 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 05:11:35.525824 systemd[1]: Stopped systemd-modules-load.service. Feb 13 05:11:35.540732 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 05:11:35.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.540892 systemd[1]: Stopped systemd-udev-trigger.service. 
Feb 13 05:11:35.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.559086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 05:11:35.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.559394 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 05:11:35.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:35.576327 systemd[1]: Stopping systemd-udevd.service... Feb 13 05:11:35.591893 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 05:11:35.592300 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 05:11:35.592354 systemd[1]: Stopped iscsid.service. Feb 13 05:11:35.612821 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 05:11:35.612904 systemd[1]: Stopped systemd-udevd.service. Feb 13 05:11:35.629198 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 05:11:35.629296 systemd[1]: Closed iscsid.socket. Feb 13 05:11:35.645753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 05:11:35.645822 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 05:11:35.660776 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 05:11:35.660857 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 05:11:35.675686 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 05:11:35.675806 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 05:11:35.693856 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 05:11:35.693991 systemd[1]: Stopped dracut-cmdline.service. Feb 13 05:11:35.709856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 05:11:35.709988 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 05:11:35.727592 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 05:11:35.740801 systemd[1]: Stopping iscsiuio.service... Feb 13 05:11:35.757540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 05:11:35.757773 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 05:11:35.774642 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 05:11:35.774959 systemd[1]: Stopped iscsiuio.service. Feb 13 05:11:35.791387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 05:11:35.791596 systemd[1]: Finished initrd-cleanup.service. Feb 13 05:11:35.807162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 05:11:35.807371 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 05:11:35.826788 systemd[1]: Reached target initrd-switch-root.target. Feb 13 05:11:35.839721 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 05:11:35.929351 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 13 05:11:35.839808 systemd[1]: Closed iscsiuio.socket. Feb 13 05:11:35.855268 systemd[1]: Starting initrd-switch-root.service... Feb 13 05:11:35.883290 systemd[1]: Switching root. 
Feb 13 05:11:35.929483 systemd-journald[268]: Journal stopped
Feb 13 05:11:39.855937 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 13 05:11:39.855951 kernel: SELinux: Class anon_inode not defined in policy.
Feb 13 05:11:39.855959 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 13 05:11:39.855964 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 05:11:39.855969 kernel: SELinux: policy capability open_perms=1
Feb 13 05:11:39.855974 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 05:11:39.855981 kernel: SELinux: policy capability always_check_network=0
Feb 13 05:11:39.855987 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 05:11:39.855992 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 05:11:39.855997 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 05:11:39.856002 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 05:11:39.856008 systemd[1]: Successfully loaded SELinux policy in 303.304ms.
Feb 13 05:11:39.856015 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.369ms.
Feb 13 05:11:39.856022 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 13 05:11:39.856030 systemd[1]: Detected architecture x86-64.
Feb 13 05:11:39.856036 systemd[1]: Detected first boot.
Feb 13 05:11:39.856041 systemd[1]: Hostname set to .
Feb 13 05:11:39.856048 systemd[1]: Initializing machine ID from random generator.
Feb 13 05:11:39.856054 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 13 05:11:39.856060 systemd[1]: Populated /etc with preset unit settings.
Feb 13 05:11:39.856066 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 05:11:39.856073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 05:11:39.856080 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 05:11:39.856086 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 05:11:39.856092 systemd[1]: Stopped initrd-switch-root.service.
Feb 13 05:11:39.856098 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 05:11:39.856105 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 13 05:11:39.856112 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 13 05:11:39.856118 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 13 05:11:39.856124 systemd[1]: Created slice system-getty.slice.
Feb 13 05:11:39.856130 systemd[1]: Created slice system-modprobe.slice.
Feb 13 05:11:39.856136 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 13 05:11:39.856142 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 13 05:11:39.856148 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 13 05:11:39.856155 systemd[1]: Created slice user.slice.
Feb 13 05:11:39.856161 systemd[1]: Started systemd-ask-password-console.path.
Feb 13 05:11:39.856167 systemd[1]: Started systemd-ask-password-wall.path.
Feb 13 05:11:39.856173 systemd[1]: Set up automount boot.automount.
Feb 13 05:11:39.856179 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 13 05:11:39.856185 systemd[1]: Stopped target initrd-switch-root.target.
Feb 13 05:11:39.856193 systemd[1]: Stopped target initrd-fs.target.
Feb 13 05:11:39.856199 systemd[1]: Stopped target initrd-root-fs.target.
Feb 13 05:11:39.856205 systemd[1]: Reached target integritysetup.target.
Feb 13 05:11:39.856213 systemd[1]: Reached target remote-cryptsetup.target.
Feb 13 05:11:39.856219 systemd[1]: Reached target remote-fs.target.
Feb 13 05:11:39.856225 systemd[1]: Reached target slices.target.
Feb 13 05:11:39.856231 systemd[1]: Reached target swap.target.
Feb 13 05:11:39.856238 systemd[1]: Reached target torcx.target.
Feb 13 05:11:39.856244 systemd[1]: Reached target veritysetup.target.
Feb 13 05:11:39.856250 systemd[1]: Listening on systemd-coredump.socket.
Feb 13 05:11:39.856256 systemd[1]: Listening on systemd-initctl.socket.
Feb 13 05:11:39.856263 systemd[1]: Listening on systemd-networkd.socket.
Feb 13 05:11:39.856271 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 13 05:11:39.856277 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 13 05:11:39.856283 systemd[1]: Listening on systemd-userdbd.socket.
Feb 13 05:11:39.856291 systemd[1]: Mounting dev-hugepages.mount...
Feb 13 05:11:39.856297 systemd[1]: Mounting dev-mqueue.mount...
Feb 13 05:11:39.856303 systemd[1]: Mounting media.mount...
Feb 13 05:11:39.856310 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 05:11:39.856316 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 13 05:11:39.856322 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 13 05:11:39.856329 systemd[1]: Mounting tmp.mount...
Feb 13 05:11:39.856355 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 13 05:11:39.856377 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 13 05:11:39.856385 systemd[1]: Starting kmod-static-nodes.service...
Feb 13 05:11:39.856391 systemd[1]: Starting modprobe@configfs.service...
Feb 13 05:11:39.856398 systemd[1]: Starting modprobe@dm_mod.service...
Feb 13 05:11:39.856404 systemd[1]: Starting modprobe@drm.service...
Feb 13 05:11:39.856411 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 13 05:11:39.856417 systemd[1]: Starting modprobe@fuse.service...
Feb 13 05:11:39.856423 kernel: fuse: init (API version 7.34)
Feb 13 05:11:39.856429 systemd[1]: Starting modprobe@loop.service...
Feb 13 05:11:39.856436 kernel: loop: module loaded
Feb 13 05:11:39.856443 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 05:11:39.856450 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 05:11:39.856456 systemd[1]: Stopped systemd-fsck-root.service.
Feb 13 05:11:39.856462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 05:11:39.856469 kernel: kauditd_printk_skb: 50 callbacks suppressed
Feb 13 05:11:39.856475 kernel: audit: type=1131 audit(1707801099.497:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.856481 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 05:11:39.856487 kernel: audit: type=1131 audit(1707801099.585:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.856494 systemd[1]: Stopped systemd-journald.service.
Feb 13 05:11:39.856501 kernel: audit: type=1130 audit(1707801099.649:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.856507 kernel: audit: type=1131 audit(1707801099.649:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.856513 kernel: audit: type=1334 audit(1707801099.734:75): prog-id=13 op=LOAD
Feb 13 05:11:39.856518 kernel: audit: type=1334 audit(1707801099.752:76): prog-id=14 op=LOAD
Feb 13 05:11:39.856524 kernel: audit: type=1334 audit(1707801099.770:77): prog-id=15 op=LOAD
Feb 13 05:11:39.856530 kernel: audit: type=1334 audit(1707801099.788:78): prog-id=11 op=UNLOAD
Feb 13 05:11:39.856537 systemd[1]: Starting systemd-journald.service...
Feb 13 05:11:39.856543 kernel: audit: type=1334 audit(1707801099.788:79): prog-id=12 op=UNLOAD
Feb 13 05:11:39.856549 systemd[1]: Starting systemd-modules-load.service...
Feb 13 05:11:39.856555 kernel: audit: type=1305 audit(1707801099.852:80): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 05:11:39.856563 systemd-journald[950]: Journal started
Feb 13 05:11:39.856587 systemd-journald[950]: Runtime Journal (/run/log/journal/a8ee12c8dca341dcad96e22658e05d11) is 8.0M, max 639.3M, 631.3M free.
Feb 13 05:11:36.375000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 05:11:36.656000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 05:11:36.658000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 05:11:36.658000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 05:11:36.658000 audit: BPF prog-id=8 op=LOAD
Feb 13 05:11:36.658000 audit: BPF prog-id=8 op=UNLOAD
Feb 13 05:11:36.658000 audit: BPF prog-id=9 op=LOAD
Feb 13 05:11:36.658000 audit: BPF prog-id=9 op=UNLOAD
Feb 13 05:11:36.725000 audit[844]: AVC avc: denied { associate } for pid=844 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 13 05:11:36.725000 audit[844]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d989c a1=c00015adf8 a2=c000163ac0 a3=32 items=0 ppid=827 pid=844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:11:36.725000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 05:11:36.750000 audit[844]: AVC avc: denied { associate } for pid=844 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 13 05:11:36.750000 audit[844]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9975 a2=1ed a3=0 items=2 ppid=827 pid=844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:11:36.750000 audit: CWD cwd="/"
Feb 13 05:11:36.750000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 05:11:36.750000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 05:11:36.750000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 05:11:38.254000 audit: BPF prog-id=10 op=LOAD
Feb 13 05:11:38.254000 audit: BPF prog-id=3 op=UNLOAD
Feb 13 05:11:38.254000 audit: BPF prog-id=11 op=LOAD
Feb 13 05:11:38.254000 audit: BPF prog-id=12 op=LOAD
Feb 13 05:11:38.254000 audit: BPF prog-id=4 op=UNLOAD
Feb 13 05:11:38.254000 audit: BPF prog-id=5 op=UNLOAD
Feb 13 05:11:38.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:38.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:38.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:38.307000 audit: BPF prog-id=10 op=UNLOAD
Feb 13 05:11:39.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:39.734000 audit: BPF prog-id=13 op=LOAD
Feb 13 05:11:39.752000 audit: BPF prog-id=14 op=LOAD
Feb 13 05:11:39.770000 audit: BPF prog-id=15 op=LOAD
Feb 13 05:11:39.788000 audit: BPF prog-id=11 op=UNLOAD
Feb 13 05:11:39.788000 audit: BPF prog-id=12 op=UNLOAD
Feb 13 05:11:39.852000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 05:11:36.724265 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 05:11:38.253652 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 05:11:36.724723 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 05:11:38.253659 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Feb 13 05:11:36.724736 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 05:11:38.256284 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 05:11:36.724756 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 13 05:11:36.724762 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 13 05:11:36.724780 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 13 05:11:36.724788 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 13 05:11:36.724913 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 13 05:11:36.724937 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 05:11:36.724946 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 05:11:36.725374 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 13 05:11:36.725395 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 13 05:11:36.725407 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 13 05:11:36.725417 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 13 05:11:36.725427 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 13 05:11:36.725435 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 13 05:11:37.916143 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 05:11:37.916282 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 05:11:37.916341 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 05:11:37.916472 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 05:11:37.916502 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 13 05:11:37.916539 /usr/lib/systemd/system-generators/torcx-generator[844]: time="2024-02-13T05:11:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 13 05:11:39.852000 audit[950]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffee5687b80 a2=4000 a3=7ffee5687c1c items=0 ppid=1 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:11:39.852000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 13 05:11:39.934523 systemd[1]: Starting systemd-network-generator.service...
Feb 13 05:11:39.961349 systemd[1]: Starting systemd-remount-fs.service...
Feb 13 05:11:39.988393 systemd[1]: Starting systemd-udev-trigger.service...
Feb 13 05:11:40.031179 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 05:11:40.031209 systemd[1]: Stopped verity-setup.service.
Feb 13 05:11:40.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.076381 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 05:11:40.096522 systemd[1]: Started systemd-journald.service.
Feb 13 05:11:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.104850 systemd[1]: Mounted dev-hugepages.mount.
Feb 13 05:11:40.111593 systemd[1]: Mounted dev-mqueue.mount.
Feb 13 05:11:40.118580 systemd[1]: Mounted media.mount.
Feb 13 05:11:40.125584 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 13 05:11:40.134582 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 13 05:11:40.142566 systemd[1]: Mounted tmp.mount.
Feb 13 05:11:40.149638 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 13 05:11:40.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.157650 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 05:11:40.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.165673 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 05:11:40.165779 systemd[1]: Finished modprobe@configfs.service.
Feb 13 05:11:40.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.174735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 05:11:40.174864 systemd[1]: Finished modprobe@dm_mod.service.
Feb 13 05:11:40.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.183861 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 05:11:40.184054 systemd[1]: Finished modprobe@drm.service.
Feb 13 05:11:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.193072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 05:11:40.193387 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 13 05:11:40.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.202213 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 05:11:40.202537 systemd[1]: Finished modprobe@fuse.service.
Feb 13 05:11:40.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.211128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 05:11:40.211445 systemd[1]: Finished modprobe@loop.service.
Feb 13 05:11:40.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.220170 systemd[1]: Finished systemd-modules-load.service.
Feb 13 05:11:40.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.229131 systemd[1]: Finished systemd-network-generator.service.
Feb 13 05:11:40.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.238214 systemd[1]: Finished systemd-remount-fs.service.
Feb 13 05:11:40.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.247116 systemd[1]: Finished systemd-udev-trigger.service.
Feb 13 05:11:40.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.256663 systemd[1]: Reached target network-pre.target.
Feb 13 05:11:40.267090 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 13 05:11:40.276071 systemd[1]: Mounting sys-kernel-config.mount...
Feb 13 05:11:40.283552 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 05:11:40.284514 systemd[1]: Starting systemd-hwdb-update.service...
Feb 13 05:11:40.292027 systemd[1]: Starting systemd-journal-flush.service...
Feb 13 05:11:40.295990 systemd-journald[950]: Time spent on flushing to /var/log/journal/a8ee12c8dca341dcad96e22658e05d11 is 11.412ms for 1294 entries.
Feb 13 05:11:40.295990 systemd-journald[950]: System Journal (/var/log/journal/a8ee12c8dca341dcad96e22658e05d11) is 8.0M, max 195.6M, 187.6M free.
Feb 13 05:11:40.330836 systemd-journald[950]: Received client request to flush runtime journal.
Feb 13 05:11:40.308440 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 05:11:40.309020 systemd[1]: Starting systemd-random-seed.service...
Feb 13 05:11:40.323441 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 13 05:11:40.323947 systemd[1]: Starting systemd-sysctl.service...
Feb 13 05:11:40.331081 systemd[1]: Starting systemd-sysusers.service...
Feb 13 05:11:40.337936 systemd[1]: Starting systemd-udev-settle.service...
Feb 13 05:11:40.345494 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 13 05:11:40.353527 systemd[1]: Mounted sys-kernel-config.mount.
Feb 13 05:11:40.361581 systemd[1]: Finished systemd-journal-flush.service.
Feb 13 05:11:40.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.369551 systemd[1]: Finished systemd-random-seed.service.
Feb 13 05:11:40.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.377541 systemd[1]: Finished systemd-sysctl.service.
Feb 13 05:11:40.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.385544 systemd[1]: Finished systemd-sysusers.service.
Feb 13 05:11:40.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.394537 systemd[1]: Reached target first-boot-complete.target.
Feb 13 05:11:40.402687 udevadm[966]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 05:11:40.587845 systemd[1]: Finished systemd-hwdb-update.service.
Feb 13 05:11:40.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.595000 audit: BPF prog-id=16 op=LOAD
Feb 13 05:11:40.595000 audit: BPF prog-id=17 op=LOAD
Feb 13 05:11:40.595000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 05:11:40.595000 audit: BPF prog-id=7 op=UNLOAD
Feb 13 05:11:40.597558 systemd[1]: Starting systemd-udevd.service...
Feb 13 05:11:40.609353 systemd-udevd[967]: Using default interface naming scheme 'v252'.
Feb 13 05:11:40.627216 systemd[1]: Started systemd-udevd.service.
Feb 13 05:11:40.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 05:11:40.637435 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Feb 13 05:11:40.636000 audit: BPF prog-id=18 op=LOAD
Feb 13 05:11:40.638711 systemd[1]: Starting systemd-networkd.service...
Feb 13 05:11:40.661000 audit: BPF prog-id=19 op=LOAD
Feb 13 05:11:40.681006 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 13 05:11:40.681085 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 05:11:40.679000 audit: BPF prog-id=20 op=LOAD
Feb 13 05:11:40.679000 audit: BPF prog-id=21 op=LOAD
Feb 13 05:11:40.669000 audit[974]: AVC avc: denied { confidentiality } for pid=974 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 05:11:40.703049 systemd[1]: Starting systemd-userdbd.service...
Feb 13 05:11:40.703338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 05:11:40.709346 kernel: IPMI message handler: version 39.2 Feb 13 05:11:40.709408 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 05:11:40.709435 kernel: ACPI: button: Power Button [PWRF] Feb 13 05:11:40.720498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 05:11:40.669000 audit[974]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562c650ad790 a1=4d8bc a2=7f643aad7bc5 a3=5 items=42 ppid=967 pid=974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:11:40.669000 audit: CWD cwd="/" Feb 13 05:11:40.669000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=1 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=2 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=3 name=(null) inode=14474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=4 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=5 name=(null) inode=14475 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=6 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=7 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=8 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=9 name=(null) inode=14477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=10 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=11 name=(null) inode=14478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=12 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=13 name=(null) inode=14479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=14 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=15 name=(null) inode=14480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=16 name=(null) inode=14476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=17 name=(null) inode=14481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=18 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=19 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=20 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=21 name=(null) inode=14483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=22 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=23 name=(null) inode=14484 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=24 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=25 name=(null) inode=14485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=26 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=27 name=(null) inode=14486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=28 name=(null) inode=14482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=29 name=(null) inode=14487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=30 name=(null) inode=14473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=31 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=32 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 
05:11:40.669000 audit: PATH item=33 name=(null) inode=14489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=34 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=35 name=(null) inode=14490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=36 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=37 name=(null) inode=14491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=38 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=39 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=40 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PATH item=41 name=(null) inode=14493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 05:11:40.669000 audit: PROCTITLE 
proctitle="(udev-worker)" Feb 13 05:11:40.800595 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 05:11:40.800846 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 05:11:40.821344 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 05:11:40.821700 kernel: ipmi device interface Feb 13 05:11:40.846586 systemd[1]: Started systemd-userdbd.service. Feb 13 05:11:40.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:40.869375 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 05:11:40.869474 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 05:11:40.893338 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 05:11:40.951749 kernel: ipmi_si: IPMI System Interface driver Feb 13 05:11:40.951774 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 05:11:40.951858 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 05:11:40.972043 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 05:11:41.009939 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 05:11:41.010182 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 05:11:41.010262 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 13 05:11:41.098031 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 05:11:41.098178 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 05:11:41.098196 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 05:11:41.180587 kernel: intel_rapl_common: Found RAPL domain package Feb 13 05:11:41.180630 kernel: ipmi_si IPI0001:00: The BMC does not support 
clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 05:11:41.180715 kernel: intel_rapl_common: Found RAPL domain core Feb 13 05:11:41.213843 kernel: intel_rapl_common: Found RAPL domain uncore Feb 13 05:11:41.213868 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 05:11:41.229793 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 13 05:11:41.291341 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 05:11:41.309636 systemd-networkd[1009]: bond0: netdev ready Feb 13 05:11:41.310338 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 05:11:41.312116 systemd-networkd[1009]: lo: Link UP Feb 13 05:11:41.312118 systemd-networkd[1009]: lo: Gained carrier Feb 13 05:11:41.312463 systemd-networkd[1009]: Enumeration completed Feb 13 05:11:41.312543 systemd[1]: Started systemd-networkd.service. Feb 13 05:11:41.312754 systemd-networkd[1009]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 05:11:41.317343 systemd-networkd[1009]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:67.network. Feb 13 05:11:41.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.320580 systemd[1]: Finished systemd-udev-settle.service. Feb 13 05:11:41.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.329231 systemd[1]: Starting lvm2-activation-early.service... Feb 13 05:11:41.344635 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 05:11:41.375739 systemd[1]: Finished lvm2-activation-early.service. 
Feb 13 05:11:41.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.383418 systemd[1]: Reached target cryptsetup.target. Feb 13 05:11:41.391915 systemd[1]: Starting lvm2-activation.service... Feb 13 05:11:41.393947 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 05:11:41.422746 systemd[1]: Finished lvm2-activation.service. Feb 13 05:11:41.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.431450 systemd[1]: Reached target local-fs-pre.target. Feb 13 05:11:41.439417 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 05:11:41.439431 systemd[1]: Reached target local-fs.target. Feb 13 05:11:41.448394 systemd[1]: Reached target machines.target. Feb 13 05:11:41.457949 systemd[1]: Starting ldconfig.service... Feb 13 05:11:41.464927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 05:11:41.464949 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 05:11:41.465472 systemd[1]: Starting systemd-boot-update.service... Feb 13 05:11:41.472808 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 05:11:41.483974 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 05:11:41.484079 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 13 05:11:41.484106 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 05:11:41.484708 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 05:11:41.484940 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) Feb 13 05:11:41.485649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 05:11:41.497320 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 05:11:41.497616 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 05:11:41.499744 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 05:11:41.504060 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 05:11:41.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.504718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 05:11:41.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.508898 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 05:11:41.561904 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Feb 13 05:11:41.561904 systemd-fsck[1081]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 05:11:41.562656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 13 05:11:41.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.574196 systemd[1]: Mounting boot.mount... Feb 13 05:11:41.594793 systemd[1]: Mounted boot.mount. Feb 13 05:11:41.612646 systemd[1]: Finished systemd-boot-update.service. Feb 13 05:11:41.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.641669 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 05:11:41.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:41.650154 systemd[1]: Starting audit-rules.service... Feb 13 05:11:41.657936 systemd[1]: Starting clean-ca-certificates.service... Feb 13 05:11:41.666920 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 05:11:41.670000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 05:11:41.670000 audit[1104]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef9f080f0 a2=420 a3=0 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:11:41.670000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 05:11:41.672285 augenrules[1104]: No rules Feb 13 05:11:41.677347 systemd[1]: Starting systemd-resolved.service... 
Feb 13 05:11:41.686387 systemd[1]: Starting systemd-timesyncd.service... Feb 13 05:11:41.694918 systemd[1]: Starting systemd-update-utmp.service... Feb 13 05:11:41.704555 systemd[1]: Finished audit-rules.service. Feb 13 05:11:41.711516 systemd[1]: Finished clean-ca-certificates.service. Feb 13 05:11:41.720584 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 05:11:41.738379 systemd[1]: Finished systemd-update-utmp.service. Feb 13 05:11:41.743395 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 05:11:41.759578 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 05:11:41.768342 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 13 05:11:41.769439 systemd-networkd[1009]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 13 05:11:41.775897 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 05:11:41.778832 systemd[1]: Finished ldconfig.service. Feb 13 05:11:41.787422 systemd[1]: Starting systemd-update-done.service... Feb 13 05:11:41.794558 systemd[1]: Finished systemd-update-done.service. Feb 13 05:11:41.802691 systemd-resolved[1109]: Positive Trust Anchors: Feb 13 05:11:41.802697 systemd-resolved[1109]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 05:11:41.802716 systemd-resolved[1109]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 05:11:41.803448 systemd[1]: Started systemd-timesyncd.service. Feb 13 05:11:41.812455 systemd[1]: Reached target time-set.target. Feb 13 05:11:41.821120 systemd-resolved[1109]: Using system hostname 'ci-3510.3.2-a-42864312d6'. Feb 13 05:11:41.855520 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 05:11:41.973409 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 05:11:41.974025 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 05:11:41.999408 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 13 05:11:42.039387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 05:11:42.040270 systemd-networkd[1009]: bond0: Link UP Feb 13 05:11:42.040573 systemd-networkd[1009]: enp2s0f1np1: Link UP Feb 13 05:11:42.040814 systemd-networkd[1009]: enp2s0f0np0: Link UP Feb 13 05:11:42.041022 systemd-networkd[1009]: enp2s0f1np1: Gained carrier Feb 13 05:11:42.041390 systemd[1]: Started systemd-resolved.service. Feb 13 05:11:42.042416 systemd-networkd[1009]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 13 05:11:42.049467 systemd[1]: Reached target network.target. Feb 13 05:11:42.057449 systemd[1]: Reached target nss-lookup.target. Feb 13 05:11:42.065426 systemd[1]: Reached target sysinit.target. 
Feb 13 05:11:42.080491 systemd[1]: Started motdgen.path. Feb 13 05:11:42.086337 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.099458 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 05:11:42.107354 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.123501 systemd[1]: Started logrotate.timer. Feb 13 05:11:42.127385 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.140481 systemd[1]: Started mdadm.timer. Feb 13 05:11:42.148387 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.161433 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 05:11:42.168394 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.183419 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 05:11:42.183436 systemd[1]: Reached target paths.target. Feb 13 05:11:42.188390 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.201418 systemd[1]: Reached target timers.target. Feb 13 05:11:42.208386 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.221548 systemd[1]: Listening on dbus.socket. Feb 13 05:11:42.227484 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.241981 systemd[1]: Starting docker.socket... Feb 13 05:11:42.247381 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.261865 systemd[1]: Listening on sshd.socket. Feb 13 05:11:42.266381 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.279496 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 13 05:11:42.279708 systemd[1]: Listening on docker.socket. Feb 13 05:11:42.285378 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.298454 systemd[1]: Reached target sockets.target. Feb 13 05:11:42.305387 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.319417 systemd[1]: Reached target basic.target. Feb 13 05:11:42.323369 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.336444 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 05:11:42.336457 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 05:11:42.336899 systemd[1]: Starting containerd.service... Feb 13 05:11:42.341385 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.354826 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 05:11:42.359399 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.375994 systemd[1]: Starting coreos-metadata.service... Feb 13 05:11:42.377337 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.392063 systemd[1]: Starting dbus.service... Feb 13 05:11:42.395335 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.407893 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 05:11:42.410536 dbus-daemon[1123]: [system] SELinux support is enabled Feb 13 05:11:42.413376 systemd-networkd[1009]: bond0: Gained carrier Feb 13 05:11:42.413623 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. 
Feb 13 05:11:42.413654 systemd-networkd[1009]: enp2s0f0np0: Gained carrier Feb 13 05:11:42.413784 coreos-metadata[1117]: Feb 13 05:11:42.412 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 05:11:42.413882 coreos-metadata[1119]: Feb 13 05:11:42.413 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 05:11:42.414118 jq[1124]: false Feb 13 05:11:42.417033 coreos-metadata[1117]: Feb 13 05:11:42.417 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 05:11:42.417091 coreos-metadata[1119]: Feb 13 05:11:42.417 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 05:11:42.427983 systemd[1]: Starting extend-filesystems.service... Feb 13 05:11:42.428358 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Feb 13 05:11:42.428379 kernel: bond0: (slave enp2s0f1np1): link status definitely down, disabling slave Feb 13 05:11:42.428393 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 05:11:42.434964 extend-filesystems[1128]: Found sda Feb 13 05:11:42.434964 extend-filesystems[1128]: Found sda1 Feb 13 05:11:42.510450 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 05:11:42.510472 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 05:11:42.510484 kernel: bond0: active interface up! Feb 13 05:11:42.488843 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda2 Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda3 Feb 13 05:11:42.510543 extend-filesystems[1128]: Found usr Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda4 Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda6 Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda7 Feb 13 05:11:42.510543 extend-filesystems[1128]: Found sda9 Feb 13 05:11:42.510543 extend-filesystems[1128]: Checking size of /dev/sda9 Feb 13 05:11:42.510543 extend-filesystems[1128]: Resized partition /dev/sda9 Feb 13 05:11:42.489562 systemd[1]: Starting motdgen.service... Feb 13 05:11:42.603559 extend-filesystems[1135]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 05:11:42.498501 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.498540 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.498695 systemd-networkd[1009]: enp2s0f1np1: Link DOWN Feb 13 05:11:42.498697 systemd-networkd[1009]: enp2s0f1np1: Lost carrier Feb 13 05:11:42.503974 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 05:11:42.509548 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.624971 jq[1157]: true Feb 13 05:11:42.509673 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.519042 systemd[1]: Starting prepare-critools.service... Feb 13 05:11:42.525943 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 05:11:42.550890 systemd[1]: Starting sshd-keygen.service... Feb 13 05:11:42.566642 systemd[1]: Starting systemd-logind.service... Feb 13 05:11:42.583408 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 05:11:42.583925 systemd[1]: Starting tcsd.service... 
Feb 13 05:11:42.588123 systemd-logind[1154]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 05:11:42.588133 systemd-logind[1154]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 05:11:42.588143 systemd-logind[1154]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 05:11:42.588240 systemd-logind[1154]: New seat seat0. Feb 13 05:11:42.595602 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 05:11:42.595961 systemd[1]: Starting update-engine.service... Feb 13 05:11:42.617100 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 05:11:42.632682 systemd[1]: Started dbus.service. Feb 13 05:11:42.639248 update_engine[1156]: I0213 05:11:42.638880 1156 main.cc:92] Flatcar Update Engine starting Feb 13 05:11:42.641974 update_engine[1156]: I0213 05:11:42.641966 1156 update_check_scheduler.cc:74] Next update check in 3m30s Feb 13 05:11:42.647543 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 05:11:42.647663 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 05:11:42.647944 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 05:11:42.648057 systemd[1]: Finished motdgen.service. Feb 13 05:11:42.649335 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 05:11:42.652460 systemd-networkd[1009]: enp2s0f1np1: Link UP Feb 13 05:11:42.652598 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.652681 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.652685 systemd-networkd[1009]: enp2s0f1np1: Gained carrier Feb 13 05:11:42.657159 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 05:11:42.657264 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 13 05:11:42.663500 tar[1159]: ./ Feb 13 05:11:42.663500 tar[1159]: ./loopback Feb 13 05:11:42.668600 dbus-daemon[1123]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 05:11:42.669183 tar[1160]: crictl Feb 13 05:11:42.669516 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.669575 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.669655 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:42.674456 systemd[1]: Started update-engine.service. Feb 13 05:11:42.674579 jq[1163]: false Feb 13 05:11:42.682628 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 05:11:42.682722 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 05:11:42.682871 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'. Feb 13 05:11:42.682951 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped. Feb 13 05:11:42.683042 systemd[1]: Started systemd-logind.service. Feb 13 05:11:42.685073 env[1164]: time="2024-02-13T05:11:42.685050305Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 05:11:42.693694 tar[1159]: ./bandwidth Feb 13 05:11:42.695284 env[1164]: time="2024-02-13T05:11:42.695264635Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 05:11:42.695359 env[1164]: time="2024-02-13T05:11:42.695348148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.695997 env[1164]: time="2024-02-13T05:11:42.695979315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696027 env[1164]: time="2024-02-13T05:11:42.695998617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696159 env[1164]: time="2024-02-13T05:11:42.696144803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696195 env[1164]: time="2024-02-13T05:11:42.696160283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696195 env[1164]: time="2024-02-13T05:11:42.696171214Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 05:11:42.696195 env[1164]: time="2024-02-13T05:11:42.696179481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696264 env[1164]: time="2024-02-13T05:11:42.696232283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696408 env[1164]: time="2024-02-13T05:11:42.696395363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696510 env[1164]: time="2024-02-13T05:11:42.696494623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 05:11:42.696539 env[1164]: time="2024-02-13T05:11:42.696510393Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 05:11:42.696563 env[1164]: time="2024-02-13T05:11:42.696550849Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 05:11:42.696594 env[1164]: time="2024-02-13T05:11:42.696562293Z" level=info msg="metadata content store policy set" policy=shared Feb 13 05:11:42.696937 systemd[1]: Started locksmithd.service. Feb 13 05:11:42.711520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 05:11:42.711611 systemd[1]: Reached target system-config.target. Feb 13 05:11:42.715336 kernel: bond0: (slave enp2s0f1np1): link status up, enabling it in 200 ms Feb 13 05:11:42.715363 kernel: bond0: (slave enp2s0f1np1): invalid new link 3 on slave Feb 13 05:11:42.719024 env[1164]: time="2024-02-13T05:11:42.719006946Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 05:11:42.719063 env[1164]: time="2024-02-13T05:11:42.719029863Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 05:11:42.719063 env[1164]: time="2024-02-13T05:11:42.719042799Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 05:11:42.719112 env[1164]: time="2024-02-13T05:11:42.719065815Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 05:11:42.719112 env[1164]: time="2024-02-13T05:11:42.719079361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719112 env[1164]: time="2024-02-13T05:11:42.719091985Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719112 env[1164]: time="2024-02-13T05:11:42.719103351Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719205 env[1164]: time="2024-02-13T05:11:42.719115897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719205 env[1164]: time="2024-02-13T05:11:42.719128352Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719205 env[1164]: time="2024-02-13T05:11:42.719140540Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719205 env[1164]: time="2024-02-13T05:11:42.719151876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 05:11:42.719205 env[1164]: time="2024-02-13T05:11:42.719162745Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 05:11:42.719316 env[1164]: time="2024-02-13T05:11:42.719229097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 05:11:42.719316 env[1164]: time="2024-02-13T05:11:42.719287361Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 05:11:42.719718 env[1164]: time="2024-02-13T05:11:42.719691448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 05:11:42.719775 env[1164]: time="2024-02-13T05:11:42.719741354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719830 env[1164]: time="2024-02-13T05:11:42.719812858Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 05:11:42.719879 env[1164]: time="2024-02-13T05:11:42.719869894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719908 env[1164]: time="2024-02-13T05:11:42.719880831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719908 env[1164]: time="2024-02-13T05:11:42.719888822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719908 env[1164]: time="2024-02-13T05:11:42.719895348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719982 env[1164]: time="2024-02-13T05:11:42.719912263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719982 env[1164]: time="2024-02-13T05:11:42.719919603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719982 env[1164]: time="2024-02-13T05:11:42.719926392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719982 env[1164]: time="2024-02-13T05:11:42.719933758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.719982 env[1164]: time="2024-02-13T05:11:42.719941703Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 05:11:42.720102 env[1164]: time="2024-02-13T05:11:42.720057784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.720102 env[1164]: time="2024-02-13T05:11:42.720067368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.720102 env[1164]: time="2024-02-13T05:11:42.720085573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 05:11:42.720102 env[1164]: time="2024-02-13T05:11:42.720092422Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 05:11:42.720197 env[1164]: time="2024-02-13T05:11:42.720100700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 05:11:42.720197 env[1164]: time="2024-02-13T05:11:42.720108413Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 05:11:42.720197 env[1164]: time="2024-02-13T05:11:42.720130416Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 05:11:42.720197 env[1164]: time="2024-02-13T05:11:42.720161731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 05:11:42.720332 env[1164]: time="2024-02-13T05:11:42.720299573Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720346002Z" level=info msg="Connect containerd service" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720368248Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720659465Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720717394Z" level=info msg="Start subscribing containerd event" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720750140Z" level=info msg="Start recovering state" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720794943Z" level=info msg="Start event monitor" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720798135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720803574Z" level=info msg="Start snapshots syncer" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720812712Z" level=info msg="Start cni network conf syncer for default" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720818368Z" level=info msg="Start streaming server" Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720820389Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 05:11:42.722391 env[1164]: time="2024-02-13T05:11:42.720842456Z" level=info msg="containerd successfully booted in 0.036162s" Feb 13 05:11:42.737495 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 05:11:42.737572 systemd[1]: Reached target user-config.target. Feb 13 05:11:42.738018 tar[1159]: ./ptp Feb 13 05:11:42.746861 systemd[1]: Started containerd.service. Feb 13 05:11:42.766891 tar[1159]: ./vlan Feb 13 05:11:42.793907 tar[1159]: ./host-device Feb 13 05:11:42.806979 locksmithd[1182]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 05:11:42.819981 tar[1159]: ./tuning Feb 13 05:11:42.843437 tar[1159]: ./vrf Feb 13 05:11:42.868253 tar[1159]: ./sbr Feb 13 05:11:42.891165 tar[1159]: ./tap Feb 13 05:11:42.919848 tar[1159]: ./dhcp Feb 13 05:11:42.940339 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 05:11:42.991080 tar[1159]: ./static Feb 13 05:11:42.995739 systemd[1]: Finished prepare-critools.service. Feb 13 05:11:43.010035 tar[1159]: ./firewall Feb 13 05:11:43.039581 tar[1159]: ./macvlan Feb 13 05:11:43.067216 tar[1159]: ./dummy Feb 13 05:11:43.093740 tar[1159]: ./bridge Feb 13 05:11:43.122358 tar[1159]: ./ipvlan Feb 13 05:11:43.127335 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 05:11:43.154287 extend-filesystems[1135]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 05:11:43.154287 extend-filesystems[1135]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 05:11:43.154287 extend-filesystems[1135]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Feb 13 05:11:43.191396 extend-filesystems[1128]: Resized filesystem in /dev/sda9 Feb 13 05:11:43.191396 extend-filesystems[1128]: Found sdb Feb 13 05:11:43.206393 tar[1159]: ./portmap Feb 13 05:11:43.206393 tar[1159]: ./host-local Feb 13 05:11:43.154782 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 05:11:43.154871 systemd[1]: Finished extend-filesystems.service. Feb 13 05:11:43.223612 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 05:11:43.417141 coreos-metadata[1119]: Feb 13 05:11:43.417 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 05:11:43.417325 coreos-metadata[1117]: Feb 13 05:11:43.417 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 05:11:43.553825 sshd_keygen[1153]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 05:11:43.565524 systemd[1]: Finished sshd-keygen.service. Feb 13 05:11:43.574240 systemd[1]: Starting issuegen.service... Feb 13 05:11:43.581666 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 05:11:43.581755 systemd[1]: Finished issuegen.service. Feb 13 05:11:43.589146 systemd[1]: Starting systemd-user-sessions.service... Feb 13 05:11:43.597703 systemd[1]: Finished systemd-user-sessions.service. Feb 13 05:11:43.606133 systemd[1]: Started getty@tty1.service. Feb 13 05:11:43.614052 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 05:11:43.622511 systemd[1]: Reached target getty.target. Feb 13 05:11:43.714421 systemd-networkd[1009]: bond0: Gained IPv6LL Feb 13 05:11:43.714594 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:44.098650 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 05:11:44.098864 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. 
Feb 13 05:11:44.710402 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 05:11:48.633525 login[1210]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 05:11:48.641295 systemd-logind[1154]: New session 1 of user core. Feb 13 05:11:48.641899 systemd[1]: Created slice user-500.slice. Feb 13 05:11:48.642540 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 05:11:48.643048 login[1209]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 05:11:48.645012 systemd-logind[1154]: New session 2 of user core. Feb 13 05:11:48.647565 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 05:11:48.648311 systemd[1]: Starting user@500.service... Feb 13 05:11:48.650525 (systemd)[1214]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:48.716911 systemd[1214]: Queued start job for default target default.target. Feb 13 05:11:48.717174 systemd[1214]: Reached target paths.target. Feb 13 05:11:48.717185 systemd[1214]: Reached target sockets.target. Feb 13 05:11:48.717192 systemd[1214]: Reached target timers.target. Feb 13 05:11:48.717199 systemd[1214]: Reached target basic.target. Feb 13 05:11:48.717232 systemd[1214]: Reached target default.target. Feb 13 05:11:48.717261 systemd[1214]: Startup finished in 63ms. Feb 13 05:11:48.717287 systemd[1]: Started user@500.service. Feb 13 05:11:48.717829 systemd[1]: Started session-1.scope. Feb 13 05:11:48.718168 systemd[1]: Started session-2.scope. 
Feb 13 05:11:49.539578 coreos-metadata[1117]: Feb 13 05:11:49.539 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 05:11:49.540311 coreos-metadata[1119]: Feb 13 05:11:49.539 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 05:11:50.225385 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 13 05:11:50.232336 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 13 05:11:51.539726 coreos-metadata[1117]: Feb 13 05:11:51.539 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 05:11:51.540637 coreos-metadata[1119]: Feb 13 05:11:51.539 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 05:11:51.569321 coreos-metadata[1117]: Feb 13 05:11:51.569 INFO Fetch successful Feb 13 05:11:51.569600 coreos-metadata[1119]: Feb 13 05:11:51.569 INFO Fetch successful Feb 13 05:11:51.595098 systemd[1]: Finished coreos-metadata.service. Feb 13 05:11:51.595945 unknown[1117]: wrote ssh authorized keys file for user: core Feb 13 05:11:51.596005 systemd[1]: Started packet-phone-home.service. Feb 13 05:11:51.603150 curl[1236]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 05:11:51.603299 curl[1236]: Dload Upload Total Spent Left Speed Feb 13 05:11:51.652385 update-ssh-keys[1237]: Updated "/home/core/.ssh/authorized_keys" Feb 13 05:11:51.653542 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 05:11:51.654562 systemd[1]: Reached target multi-user.target. Feb 13 05:11:51.657604 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 05:11:51.674148 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 13 05:11:51.674438 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 05:11:51.674830 systemd[1]: Startup finished in 2.013s (kernel) + 6.223s (initrd) + 15.622s (userspace) = 23.860s. Feb 13 05:11:51.829990 curl[1236]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 05:11:51.832508 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 05:11:58.727282 systemd[1]: Created slice system-sshd.slice. Feb 13 05:11:58.727931 systemd[1]: Started sshd@0-147.75.90.7:22-139.178.68.195:47066.service. Feb 13 05:11:58.810604 sshd[1240]: Accepted publickey for core from 139.178.68.195 port 47066 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:58.811370 sshd[1240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:58.813791 systemd-logind[1154]: New session 3 of user core. Feb 13 05:11:58.814292 systemd[1]: Started session-3.scope. Feb 13 05:11:58.865759 systemd[1]: Started sshd@1-147.75.90.7:22-139.178.68.195:47068.service. Feb 13 05:11:58.894438 sshd[1245]: Accepted publickey for core from 139.178.68.195 port 47068 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:58.895145 sshd[1245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:58.897582 systemd-logind[1154]: New session 4 of user core. Feb 13 05:11:58.897980 systemd[1]: Started session-4.scope. Feb 13 05:11:58.949053 sshd[1245]: pam_unix(sshd:session): session closed for user core Feb 13 05:11:58.951562 systemd[1]: sshd@1-147.75.90.7:22-139.178.68.195:47068.service: Deactivated successfully. Feb 13 05:11:58.952196 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 05:11:58.952903 systemd-logind[1154]: Session 4 logged out. Waiting for processes to exit. 
Feb 13 05:11:58.953873 systemd[1]: Started sshd@2-147.75.90.7:22-139.178.68.195:47072.service. Feb 13 05:11:58.954695 systemd-logind[1154]: Removed session 4. Feb 13 05:11:58.999643 sshd[1251]: Accepted publickey for core from 139.178.68.195 port 47072 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:59.001817 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:59.009406 systemd-logind[1154]: New session 5 of user core. Feb 13 05:11:59.011028 systemd[1]: Started session-5.scope. Feb 13 05:11:59.079509 sshd[1251]: pam_unix(sshd:session): session closed for user core Feb 13 05:11:59.086110 systemd[1]: sshd@2-147.75.90.7:22-139.178.68.195:47072.service: Deactivated successfully. Feb 13 05:11:59.087658 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 05:11:59.089389 systemd-logind[1154]: Session 5 logged out. Waiting for processes to exit. Feb 13 05:11:59.091823 systemd[1]: Started sshd@3-147.75.90.7:22-139.178.68.195:47074.service. Feb 13 05:11:59.094195 systemd-logind[1154]: Removed session 5. Feb 13 05:11:59.124990 sshd[1257]: Accepted publickey for core from 139.178.68.195 port 47074 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:59.125735 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:59.128596 systemd-logind[1154]: New session 6 of user core. Feb 13 05:11:59.129151 systemd[1]: Started session-6.scope. Feb 13 05:11:59.193824 sshd[1257]: pam_unix(sshd:session): session closed for user core Feb 13 05:11:59.200199 systemd[1]: sshd@3-147.75.90.7:22-139.178.68.195:47074.service: Deactivated successfully. Feb 13 05:11:59.201727 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 05:11:59.203428 systemd-logind[1154]: Session 6 logged out. Waiting for processes to exit. Feb 13 05:11:59.206015 systemd[1]: Started sshd@4-147.75.90.7:22-139.178.68.195:47088.service. 
Feb 13 05:11:59.208389 systemd-logind[1154]: Removed session 6. Feb 13 05:11:59.261800 sshd[1263]: Accepted publickey for core from 139.178.68.195 port 47088 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:59.263401 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:59.268934 systemd-logind[1154]: New session 7 of user core. Feb 13 05:11:59.270091 systemd[1]: Started session-7.scope. Feb 13 05:11:59.363530 sudo[1267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 05:11:59.364163 sudo[1267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 05:11:59.382771 dbus-daemon[1123]: \xd0-\xce(\u001dV: received setenforce notice (enforcing=-153223216) Feb 13 05:11:59.387661 sudo[1267]: pam_unix(sudo:session): session closed for user root Feb 13 05:11:59.392642 sshd[1263]: pam_unix(sshd:session): session closed for user core Feb 13 05:11:59.399519 systemd[1]: sshd@4-147.75.90.7:22-139.178.68.195:47088.service: Deactivated successfully. Feb 13 05:11:59.401178 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 05:11:59.402963 systemd-logind[1154]: Session 7 logged out. Waiting for processes to exit. Feb 13 05:11:59.405590 systemd[1]: Started sshd@5-147.75.90.7:22-139.178.68.195:47092.service. Feb 13 05:11:59.407961 systemd-logind[1154]: Removed session 7. Feb 13 05:11:59.439156 sshd[1271]: Accepted publickey for core from 139.178.68.195 port 47092 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:59.439993 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:59.443024 systemd-logind[1154]: New session 8 of user core. Feb 13 05:11:59.443607 systemd[1]: Started session-8.scope. 
Feb 13 05:11:59.506110 sudo[1275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 05:11:59.506711 sudo[1275]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 05:11:59.513656 sudo[1275]: pam_unix(sudo:session): session closed for user root Feb 13 05:11:59.525980 sudo[1274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 05:11:59.526587 sudo[1274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 05:11:59.550726 systemd[1]: Stopping audit-rules.service... Feb 13 05:11:59.552000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 13 05:11:59.553850 auditctl[1278]: No rules Feb 13 05:11:59.554842 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 05:11:59.555363 systemd[1]: Stopped audit-rules.service. Feb 13 05:11:59.559307 kernel: kauditd_printk_skb: 95 callbacks suppressed Feb 13 05:11:59.559495 kernel: audit: type=1305 audit(1707801119.552:127): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 13 05:11:59.560075 systemd[1]: Starting audit-rules.service... Feb 13 05:11:59.552000 audit[1278]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe14a7e9e0 a2=420 a3=0 items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:11:59.590080 augenrules[1295]: No rules Feb 13 05:11:59.590636 systemd[1]: Finished audit-rules.service. 
Feb 13 05:11:59.591351 sudo[1274]: pam_unix(sudo:session): session closed for user root Feb 13 05:11:59.592506 sshd[1271]: pam_unix(sshd:session): session closed for user core Feb 13 05:11:59.595321 systemd[1]: Started sshd@6-147.75.90.7:22-139.178.68.195:47104.service. Feb 13 05:11:59.595752 systemd[1]: sshd@5-147.75.90.7:22-139.178.68.195:47092.service: Deactivated successfully. Feb 13 05:11:59.596228 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 05:11:59.596780 systemd-logind[1154]: Session 8 logged out. Waiting for processes to exit. Feb 13 05:11:59.597624 systemd-logind[1154]: Removed session 8. Feb 13 05:11:59.606373 kernel: audit: type=1300 audit(1707801119.552:127): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe14a7e9e0 a2=420 a3=0 items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:11:59.606415 kernel: audit: type=1327 audit(1707801119.552:127): proctitle=2F7362696E2F617564697463746C002D44 Feb 13 05:11:59.552000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 13 05:11:59.615899 kernel: audit: type=1131 audit(1707801119.554:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.638345 kernel: audit: type=1130 audit(1707801119.589:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:59.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.639642 sshd[1300]: Accepted publickey for core from 139.178.68.195 port 47104 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 05:11:59.640597 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 05:11:59.642621 systemd-logind[1154]: New session 9 of user core. Feb 13 05:11:59.642986 systemd[1]: Started session-9.scope. Feb 13 05:11:59.660787 kernel: audit: type=1106 audit(1707801119.590:130): pid=1274 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.590000 audit[1274]: USER_END pid=1274 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.590000 audit[1274]: CRED_DISP pid=1274 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.689782 sudo[1304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 05:11:59.689893 sudo[1304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 05:11:59.710324 kernel: audit: type=1104 audit(1707801119.590:131): pid=1274 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:59.710377 kernel: audit: type=1106 audit(1707801119.592:132): pid=1271 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.592000 audit[1271]: USER_END pid=1271 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.742503 kernel: audit: type=1104 audit(1707801119.592:133): pid=1271 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.592000 audit[1271]: CRED_DISP pid=1271 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.768494 kernel: audit: type=1130 audit(1707801119.594:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-147.75.90.7:22-139.178.68.195:47104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-147.75.90.7:22-139.178.68.195:47104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:11:59.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-147.75.90.7:22-139.178.68.195:47092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.638000 audit[1300]: USER_ACCT pid=1300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.639000 audit[1300]: CRED_ACQ pid=1300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.639000 audit[1300]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0cc69d70 a2=3 a3=0 items=0 ppid=1 pid=1300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:11:59.639000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 13 05:11:59.643000 audit[1300]: USER_START pid=1300 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.644000 audit[1303]: CRED_ACQ pid=1303 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:11:59.688000 audit[1304]: USER_ACCT pid=1304 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.688000 audit[1304]: CRED_REFR pid=1304 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:11:59.689000 audit[1304]: USER_START pid=1304 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:12:03.700039 systemd[1]: Reloading. Feb 13 05:12:03.734075 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-02-13T05:12:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:12:03.734097 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-02-13T05:12:03Z" level=info msg="torcx already run" Feb 13 05:12:03.799061 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 05:12:03.799074 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 05:12:03.814442 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit: BPF prog-id=29 op=LOAD Feb 13 05:12:03.859000 audit: BPF prog-id=24 op=UNLOAD Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit: BPF prog-id=30 op=LOAD Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.859000 audit: BPF prog-id=31 op=LOAD Feb 13 05:12:03.860000 audit: BPF prog-id=25 op=UNLOAD Feb 13 05:12:03.860000 audit: BPF prog-id=26 op=UNLOAD Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit: BPF prog-id=32 op=LOAD Feb 13 05:12:03.860000 audit: BPF prog-id=13 op=UNLOAD Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit: BPF prog-id=33 op=LOAD Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.860000 audit: BPF prog-id=34 op=LOAD Feb 13 05:12:03.860000 audit: BPF prog-id=14 op=UNLOAD Feb 13 05:12:03.860000 audit: BPF prog-id=15 op=UNLOAD Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit: BPF prog-id=35 op=LOAD Feb 13 05:12:03.861000 audit: BPF prog-id=19 op=UNLOAD Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit: BPF prog-id=36 op=LOAD Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.861000 audit: BPF prog-id=37 op=LOAD Feb 13 05:12:03.861000 audit: BPF prog-id=20 op=UNLOAD Feb 13 05:12:03.861000 audit: BPF prog-id=21 op=UNLOAD Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit: BPF prog-id=38 op=LOAD Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.862000 audit: BPF prog-id=39 op=LOAD Feb 13 05:12:03.862000 audit: BPF prog-id=16 op=UNLOAD Feb 13 05:12:03.862000 audit: BPF prog-id=17 op=UNLOAD Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit: BPF prog-id=40 op=LOAD Feb 13 05:12:03.863000 audit: BPF prog-id=18 op=UNLOAD Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit: BPF prog-id=41 op=LOAD Feb 13 05:12:03.864000 audit: BPF prog-id=27 op=UNLOAD Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit: BPF prog-id=42 op=LOAD Feb 13 05:12:03.864000 audit: BPF prog-id=22 op=UNLOAD Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.865000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:03.865000 audit: BPF prog-id=43 op=LOAD Feb 13 05:12:03.865000 audit: BPF prog-id=23 op=UNLOAD Feb 13 05:12:03.869929 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 05:12:03.873639 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 05:12:03.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:03.873896 systemd[1]: Reached target network-online.target. 
Feb 13 05:12:03.874559 systemd[1]: Started kubelet.service. Feb 13 05:12:03.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:04.493839 kubelet[1390]: E0213 05:12:04.493683 1390 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 13 05:12:04.500679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 05:12:04.501081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 05:12:04.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 13 05:12:04.810800 systemd[1]: Stopped kubelet.service. Feb 13 05:12:04.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:04.817109 kernel: kauditd_printk_skb: 186 callbacks suppressed Feb 13 05:12:04.817286 kernel: audit: type=1130 audit(1707801124.809:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:04.828047 systemd[1]: Reloading. Feb 13 05:12:04.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:12:04.858804 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2024-02-13T05:12:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 05:12:04.858820 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2024-02-13T05:12:04Z" level=info msg="torcx already run" Feb 13 05:12:04.878084 kernel: audit: type=1131 audit(1707801124.809:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:04.906171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 05:12:04.906178 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 05:12:04.916944 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.072546 kernel: audit: type=1400 audit(1707801124.961:321): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.072591 kernel: audit: type=1400 audit(1707801124.961:322): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.072604 kernel: audit: type=1400 audit(1707801124.961:323): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.128844 kernel: audit: type=1400 audit(1707801124.961:324): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.187104 kernel: audit: type=1400 audit(1707801124.961:325): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.247055 kernel: audit: type=1400 audit(1707801124.961:326): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.308439 kernel: audit: type=1400 audit(1707801124.961:327): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.371020 kernel: audit: type=1400 audit(1707801124.961:328): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:04.961000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit: BPF prog-id=44 op=LOAD Feb 13 05:12:05.127000 audit: BPF prog-id=29 op=UNLOAD Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.127000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit: BPF prog-id=45 op=LOAD Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.245000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.432000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 05:12:05.432000 audit: BPF prog-id=46 op=LOAD Feb 13 05:12:05.432000 audit: BPF prog-id=30 op=UNLOAD Feb 13 05:12:05.432000 audit: BPF prog-id=31 op=UNLOAD Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit: BPF prog-id=47 op=LOAD Feb 13 05:12:05.433000 audit: BPF prog-id=32 op=UNLOAD Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit: BPF prog-id=48 op=LOAD Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 05:12:05.433000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.433000 audit: BPF prog-id=49 op=LOAD Feb 13 05:12:05.433000 audit: BPF prog-id=33 op=UNLOAD Feb 13 05:12:05.433000 audit: BPF prog-id=34 op=UNLOAD Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit: BPF prog-id=50 op=LOAD Feb 13 05:12:05.434000 audit: BPF prog-id=35 op=UNLOAD Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit: BPF prog-id=51 op=LOAD Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.434000 audit: BPF prog-id=52 op=LOAD Feb 13 05:12:05.434000 audit: BPF prog-id=36 op=UNLOAD Feb 13 05:12:05.434000 audit: BPF prog-id=37 op=UNLOAD Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit: BPF prog-id=53 op=LOAD Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.435000 audit: BPF prog-id=54 op=LOAD Feb 13 05:12:05.435000 audit: BPF prog-id=38 op=UNLOAD Feb 13 05:12:05.435000 audit: BPF prog-id=39 op=UNLOAD Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.436000 audit: BPF prog-id=55 op=LOAD Feb 13 05:12:05.436000 audit: BPF prog-id=40 op=UNLOAD Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit: BPF prog-id=56 op=LOAD Feb 13 05:12:05.437000 audit: BPF prog-id=41 op=UNLOAD Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit: BPF prog-id=57 op=LOAD Feb 13 05:12:05.437000 audit: BPF prog-id=42 op=UNLOAD Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.437000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.438000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.438000 audit: BPF prog-id=58 op=LOAD Feb 13 05:12:05.438000 audit: BPF prog-id=43 op=UNLOAD Feb 13 05:12:05.446200 systemd[1]: Started kubelet.service. Feb 13 05:12:05.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:05.481245 kubelet[1549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 05:12:05.481245 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 05:12:05.481245 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 05:12:05.481472 kubelet[1549]: I0213 05:12:05.481256 1549 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 05:12:05.682625 kubelet[1549]: I0213 05:12:05.682504 1549 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 05:12:05.682625 kubelet[1549]: I0213 05:12:05.682534 1549 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 05:12:05.682925 kubelet[1549]: I0213 05:12:05.682875 1549 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 05:12:05.694721 kubelet[1549]: I0213 05:12:05.694664 1549 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 05:12:05.716227 kubelet[1549]: I0213 05:12:05.716216 1549 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 05:12:05.716405 kubelet[1549]: I0213 05:12:05.716339 1549 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 05:12:05.716405 kubelet[1549]: I0213 05:12:05.716381 1549 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 05:12:05.716405 kubelet[1549]: I0213 05:12:05.716393 1549 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 05:12:05.716405 kubelet[1549]: I0213 05:12:05.716400 1549 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 05:12:05.716822 kubelet[1549]: I0213 05:12:05.716787 1549 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
05:12:05.722386 kubelet[1549]: I0213 05:12:05.722362 1549 kubelet.go:405] "Attempting to sync node with API server" Feb 13 05:12:05.722386 kubelet[1549]: I0213 05:12:05.722375 1549 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 05:12:05.722460 kubelet[1549]: I0213 05:12:05.722391 1549 kubelet.go:309] "Adding apiserver pod source" Feb 13 05:12:05.722460 kubelet[1549]: I0213 05:12:05.722401 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 05:12:05.722460 kubelet[1549]: E0213 05:12:05.722455 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:05.722531 kubelet[1549]: E0213 05:12:05.722483 1549 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:05.724873 kubelet[1549]: I0213 05:12:05.724831 1549 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 05:12:05.726325 kubelet[1549]: W0213 05:12:05.726268 1549 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 05:12:05.727633 kubelet[1549]: I0213 05:12:05.727595 1549 server.go:1168] "Started kubelet" Feb 13 05:12:05.727682 kubelet[1549]: I0213 05:12:05.727645 1549 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 05:12:05.727796 kubelet[1549]: I0213 05:12:05.727756 1549 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 05:12:05.727000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.727000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 05:12:05.727000 audit[1549]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b2f680 a1=c000328a80 a2=c000b2f650 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.727000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 05:12:05.727000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.727000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 05:12:05.727000 audit[1549]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005ba700 a1=c000328a98 a2=c000b2f710 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.727000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 05:12:05.728930 kubelet[1549]: I0213 05:12:05.728567 1549 kubelet.go:1355] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 13 05:12:05.728930 kubelet[1549]: I0213 05:12:05.728590 1549 kubelet.go:1359] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 13 05:12:05.728930 kubelet[1549]: I0213 05:12:05.728626 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 05:12:05.728930 kubelet[1549]: I0213 05:12:05.728688 1549 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 13 05:12:05.728930 kubelet[1549]: I0213 05:12:05.728734 1549 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 13 05:12:05.729181 kubelet[1549]: E0213 05:12:05.729166 1549 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 05:12:05.729225 kubelet[1549]: E0213 05:12:05.729188 1549 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 05:12:05.730097 kubelet[1549]: I0213 05:12:05.730059 1549 server.go:461] "Adding debug handlers to kubelet server" Feb 13 05:12:05.731269 kubelet[1549]: W0213 05:12:05.731252 1549 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 05:12:05.731269 kubelet[1549]: W0213 05:12:05.731250 1549 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 05:12:05.731269 kubelet[1549]: W0213 05:12:05.731260 1549 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 05:12:05.731469 kubelet[1549]: E0213 05:12:05.731269 1549 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 05:12:05.731469 kubelet[1549]: E0213 05:12:05.731290 1549 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 05:12:05.731469 kubelet[1549]: E0213 05:12:05.731289 1549 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot 
list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 05:12:05.731469 kubelet[1549]: E0213 05:12:05.731296 1549 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 05:12:05.731573 kubelet[1549]: E0213 05:12:05.731273 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b3541356013b6c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 727583084, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 727583084, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.732088 kubelet[1549]: E0213 05:12:05.731998 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b3541356199399", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 729178521, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 729178521, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.745697 kubelet[1549]: I0213 05:12:05.745645 1549 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 05:12:05.745697 kubelet[1549]: I0213 05:12:05.745658 1549 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 05:12:05.745697 kubelet[1549]: I0213 05:12:05.745669 1549 state_mem.go:36] "Initialized new in-memory state store" Feb 13 05:12:05.746630 kubelet[1549]: I0213 05:12:05.746599 1549 policy_none.go:49] "None policy: Start" Feb 13 05:12:05.746744 kubelet[1549]: E0213 05:12:05.746671 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570ef02d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745258541, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745258541, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.746988 kubelet[1549]: I0213 05:12:05.746924 1549 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 05:12:05.746988 kubelet[1549]: I0213 05:12:05.746959 1549 state_mem.go:35] "Initializing new in-memory state store" Feb 13 05:12:05.748217 kubelet[1549]: E0213 05:12:05.748155 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f0947", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745264967, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745264967, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.749661 kubelet[1549]: E0213 05:12:05.749601 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f1291", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745267345, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745267345, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 05:12:05.750080 systemd[1]: Created slice kubepods.slice. Feb 13 05:12:05.752135 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 13 05:12:05.751000 audit[1577]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.751000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9a885a90 a2=0 a3=7ffe9a885a7c items=0 ppid=1549 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 13 05:12:05.753759 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 05:12:05.752000 audit[1580]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.752000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fffc1ac7ac0 a2=0 a3=7fffc1ac7aac items=0 ppid=1549 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 13 05:12:05.775930 kubelet[1549]: I0213 05:12:05.775919 1549 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 05:12:05.774000 audit[1549]: AVC avc: denied { mac_admin } for pid=1549 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:05.774000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 13 05:12:05.774000 audit[1549]: SYSCALL 
arch=c000003e syscall=188 success=no exit=-22 a0=c0009f1fb0 a1=c00129e948 a2=c0009f1f80 a3=25 items=0 ppid=1 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.774000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 13 05:12:05.776088 kubelet[1549]: I0213 05:12:05.775951 1549 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 13 05:12:05.776088 kubelet[1549]: I0213 05:12:05.776061 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 05:12:05.776382 kubelet[1549]: E0213 05:12:05.776342 1549 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.13\" not found" Feb 13 05:12:05.779524 kubelet[1549]: E0213 05:12:05.779459 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b3541358fce869", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 777631337, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 777631337, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 05:12:05.753000 audit[1582]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.753000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd64b6e070 a2=0 a3=7ffd64b6e05c items=0 ppid=1549 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 05:12:05.802000 audit[1587]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.802000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeaeae9dd0 a2=0 a3=7ffeaeae9dbc items=0 ppid=1549 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.802000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 05:12:05.830549 kubelet[1549]: I0213 05:12:05.830471 1549 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.13" Feb 13 05:12:05.832769 kubelet[1549]: E0213 05:12:05.832697 1549 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.13" Feb 13 05:12:05.833454 kubelet[1549]: E0213 05:12:05.833274 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570ef02d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745258541, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 830318792, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570ef02d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 13 05:12:05.835549 kubelet[1549]: E0213 05:12:05.835383 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f0947", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745264967, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 830364053, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f0947" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.837545 kubelet[1549]: E0213 05:12:05.837386 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f1291", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745267345, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 830374378, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f1291" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:05.879000 audit[1592]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.879000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdda04dab0 a2=0 a3=7ffdda04da9c items=0 ppid=1549 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 13 05:12:05.881575 kubelet[1549]: I0213 05:12:05.881535 1549 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 05:12:05.881000 audit[1593]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:05.881000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd78dbbf20 a2=0 a3=7ffd78dbbf0c items=0 ppid=1549 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 13 05:12:05.881000 audit[1594]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.881000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff02235910 a2=0 a3=7fff022358fc items=0 ppid=1549 pid=1594 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 13 05:12:05.882883 kubelet[1549]: I0213 05:12:05.882658 1549 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 05:12:05.882883 kubelet[1549]: I0213 05:12:05.882697 1549 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 13 05:12:05.882883 kubelet[1549]: I0213 05:12:05.882736 1549 kubelet.go:2257] "Starting kubelet main sync loop" Feb 13 05:12:05.882883 kubelet[1549]: E0213 05:12:05.882810 1549 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 05:12:05.882000 audit[1595]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:05.882000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8f8b3a60 a2=0 a3=7ffe8f8b3a4c items=0 ppid=1549 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 13 05:12:05.882000 audit[1596]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.882000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff9da08880 a2=0 a3=7fff9da0886c items=0 ppid=1549 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 13 05:12:05.883000 audit[1597]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:05.883000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff65a17140 a2=0 a3=7fff65a1712c items=0 ppid=1549 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 13 05:12:05.883000 audit[1598]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:05.883000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb21e40a0 a2=0 a3=7fffb21e408c items=0 ppid=1549 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 13 05:12:05.885241 kubelet[1549]: W0213 05:12:05.885165 1549 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 05:12:05.885241 kubelet[1549]: E0213 05:12:05.885195 1549 
reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 05:12:05.884000 audit[1599]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:05.884000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffa983c2f0 a2=0 a3=7fffa983c2dc items=0 ppid=1549 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:05.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 13 05:12:05.934415 kubelet[1549]: E0213 05:12:05.934190 1549 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 13 05:12:06.034636 kubelet[1549]: I0213 05:12:06.034569 1549 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.13" Feb 13 05:12:06.037206 kubelet[1549]: E0213 05:12:06.037160 1549 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.13" Feb 13 05:12:06.037454 kubelet[1549]: E0213 05:12:06.037171 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570ef02d", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745258541, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 34477205, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570ef02d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:06.039610 kubelet[1549]: E0213 05:12:06.039416 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f0947", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745264967, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 34495087, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f0947" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:06.041687 kubelet[1549]: E0213 05:12:06.041498 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f1291", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745267345, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 34501857, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f1291" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:06.336910 kubelet[1549]: E0213 05:12:06.336734 1549 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.80.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 13 05:12:06.438822 kubelet[1549]: I0213 05:12:06.438751 1549 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.13" Feb 13 05:12:06.441327 kubelet[1549]: E0213 05:12:06.441271 1549 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.13" Feb 13 05:12:06.441600 kubelet[1549]: E0213 05:12:06.441222 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570ef02d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.13 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745258541, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 438649179, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570ef02d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 13 05:12:06.443753 kubelet[1549]: E0213 05:12:06.443600 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f0947", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.13 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745264967, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 438671605, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f0947" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:06.446525 kubelet[1549]: E0213 05:12:06.446319 1549 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.13.17b35413570f1291", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.13", UID:"10.67.80.13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.13 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.13"}, FirstTimestamp:time.Date(2024, time.February, 13, 5, 12, 5, 745267345, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 5, 12, 6, 438682086, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.13.17b35413570f1291" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 13 05:12:06.685859 kubelet[1549]: I0213 05:12:06.685648 1549 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 05:12:06.723978 kubelet[1549]: E0213 05:12:06.723912 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:07.088052 kubelet[1549]: E0213 05:12:07.087874 1549 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.13" not found Feb 13 05:12:07.146848 kubelet[1549]: E0213 05:12:07.146786 1549 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.13\" not found" node="10.67.80.13" Feb 13 05:12:07.243505 kubelet[1549]: I0213 05:12:07.243448 1549 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.13" Feb 13 05:12:07.252320 kubelet[1549]: I0213 05:12:07.252272 1549 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.13" Feb 13 05:12:07.264519 kubelet[1549]: E0213 05:12:07.264468 1549 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.13\" not found" Feb 13 05:12:07.366696 kubelet[1549]: I0213 05:12:07.366526 1549 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 05:12:07.367406 env[1164]: time="2024-02-13T05:12:07.367295456Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 05:12:07.368200 kubelet[1549]: I0213 05:12:07.367775 1549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 05:12:07.475669 sudo[1304]: pam_unix(sudo:session): session closed for user root Feb 13 05:12:07.474000 audit[1304]: USER_END pid=1304 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:12:07.474000 audit[1304]: CRED_DISP pid=1304 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 13 05:12:07.478634 sshd[1300]: pam_unix(sshd:session): session closed for user core Feb 13 05:12:07.479000 audit[1300]: USER_END pid=1300 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:12:07.480000 audit[1300]: CRED_DISP pid=1300 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Feb 13 05:12:07.484426 systemd[1]: sshd@6-147.75.90.7:22-139.178.68.195:47104.service: Deactivated successfully. Feb 13 05:12:07.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-147.75.90.7:22-139.178.68.195:47104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:12:07.486186 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 05:12:07.487960 systemd-logind[1154]: Session 9 logged out. 
Waiting for processes to exit. Feb 13 05:12:07.490276 systemd-logind[1154]: Removed session 9. Feb 13 05:12:07.725061 kubelet[1549]: I0213 05:12:07.724835 1549 apiserver.go:52] "Watching apiserver" Feb 13 05:12:07.725061 kubelet[1549]: E0213 05:12:07.724919 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:07.728847 kubelet[1549]: I0213 05:12:07.728762 1549 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:12:07.729032 kubelet[1549]: I0213 05:12:07.728997 1549 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:12:07.729216 kubelet[1549]: I0213 05:12:07.729112 1549 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:12:07.729924 kubelet[1549]: E0213 05:12:07.729757 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:07.737231 systemd[1]: Created slice kubepods-besteffort-pod2593c255_9bd9_4c8a_a0c8_d2724bc0d7f6.slice. 
Feb 13 05:12:07.737728 kubelet[1549]: I0213 05:12:07.737719 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-node-certs\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737781 kubelet[1549]: I0213 05:12:07.737740 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-var-run-calico\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737781 kubelet[1549]: I0213 05:12:07.737753 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-cni-bin-dir\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737817 kubelet[1549]: I0213 05:12:07.737806 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-cni-net-dir\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737840 kubelet[1549]: I0213 05:12:07.737830 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-cni-log-dir\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737894 kubelet[1549]: I0213 05:12:07.737862 1549 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fld8n\" (UniqueName: \"kubernetes.io/projected/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-kube-api-access-fld8n\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737950 kubelet[1549]: I0213 05:12:07.737943 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-xtables-lock\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737972 kubelet[1549]: I0213 05:12:07.737966 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-policysync\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.737990 kubelet[1549]: I0213 05:12:07.737981 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-tigera-ca-bundle\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.738022 kubelet[1549]: I0213 05:12:07.737992 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-var-lib-calico\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.738060 kubelet[1549]: I0213 05:12:07.738025 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-flexvol-driver-host\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.738060 kubelet[1549]: I0213 05:12:07.738056 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6-lib-modules\") pod \"calico-node-zsrwd\" (UID: \"2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6\") " pod="calico-system/calico-node-zsrwd" Feb 13 05:12:07.748208 systemd[1]: Created slice kubepods-besteffort-podafbf8dc2_46dd_4b6b_bbe0_73589e0ed4a7.slice. Feb 13 05:12:07.830206 kubelet[1549]: I0213 05:12:07.830135 1549 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 13 05:12:07.838552 kubelet[1549]: I0213 05:12:07.838491 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae936456-294f-41c6-9471-c93c49d5b396-socket-dir\") pod \"csi-node-driver-2wz5l\" (UID: \"ae936456-294f-41c6-9471-c93c49d5b396\") " pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:07.838827 kubelet[1549]: I0213 05:12:07.838642 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkd8\" (UniqueName: \"kubernetes.io/projected/ae936456-294f-41c6-9471-c93c49d5b396-kube-api-access-9zkd8\") pod \"csi-node-driver-2wz5l\" (UID: \"ae936456-294f-41c6-9471-c93c49d5b396\") " pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:07.838827 kubelet[1549]: I0213 05:12:07.838731 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7-xtables-lock\") pod \"kube-proxy-qgl8b\" (UID: 
\"afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7\") " pod="kube-system/kube-proxy-qgl8b" Feb 13 05:12:07.839299 kubelet[1549]: I0213 05:12:07.839247 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae936456-294f-41c6-9471-c93c49d5b396-varrun\") pod \"csi-node-driver-2wz5l\" (UID: \"ae936456-294f-41c6-9471-c93c49d5b396\") " pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:07.839562 kubelet[1549]: I0213 05:12:07.839396 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae936456-294f-41c6-9471-c93c49d5b396-kubelet-dir\") pod \"csi-node-driver-2wz5l\" (UID: \"ae936456-294f-41c6-9471-c93c49d5b396\") " pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:07.839562 kubelet[1549]: I0213 05:12:07.839526 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae936456-294f-41c6-9471-c93c49d5b396-registration-dir\") pod \"csi-node-driver-2wz5l\" (UID: \"ae936456-294f-41c6-9471-c93c49d5b396\") " pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:07.839838 kubelet[1549]: I0213 05:12:07.839619 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7-lib-modules\") pod \"kube-proxy-qgl8b\" (UID: \"afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7\") " pod="kube-system/kube-proxy-qgl8b" Feb 13 05:12:07.839838 kubelet[1549]: I0213 05:12:07.839713 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hks67\" (UniqueName: \"kubernetes.io/projected/afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7-kube-api-access-hks67\") pod \"kube-proxy-qgl8b\" (UID: \"afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7\") " 
pod="kube-system/kube-proxy-qgl8b" Feb 13 05:12:07.840514 kubelet[1549]: E0213 05:12:07.840430 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.840514 kubelet[1549]: W0213 05:12:07.840473 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.840937 kubelet[1549]: E0213 05:12:07.840554 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.841365 kubelet[1549]: E0213 05:12:07.841293 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.841365 kubelet[1549]: W0213 05:12:07.841329 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.841582 kubelet[1549]: E0213 05:12:07.841414 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.841944 kubelet[1549]: E0213 05:12:07.841868 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.841944 kubelet[1549]: W0213 05:12:07.841893 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.841944 kubelet[1549]: E0213 05:12:07.841938 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.842527 kubelet[1549]: E0213 05:12:07.842448 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.842527 kubelet[1549]: W0213 05:12:07.842480 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.842527 kubelet[1549]: E0213 05:12:07.842530 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.843198 kubelet[1549]: E0213 05:12:07.843116 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.843198 kubelet[1549]: W0213 05:12:07.843151 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.843198 kubelet[1549]: E0213 05:12:07.843202 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.843849 kubelet[1549]: E0213 05:12:07.843767 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.843849 kubelet[1549]: W0213 05:12:07.843802 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.844133 kubelet[1549]: E0213 05:12:07.843914 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.844446 kubelet[1549]: E0213 05:12:07.844377 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.844446 kubelet[1549]: W0213 05:12:07.844405 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.844760 kubelet[1549]: E0213 05:12:07.844499 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.844973 kubelet[1549]: E0213 05:12:07.844908 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.844973 kubelet[1549]: W0213 05:12:07.844942 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.845187 kubelet[1549]: E0213 05:12:07.845044 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.845187 kubelet[1549]: I0213 05:12:07.845165 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7-kube-proxy\") pod \"kube-proxy-qgl8b\" (UID: \"afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7\") " pod="kube-system/kube-proxy-qgl8b" Feb 13 05:12:07.845593 kubelet[1549]: E0213 05:12:07.845515 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.845593 kubelet[1549]: W0213 05:12:07.845543 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.845889 kubelet[1549]: E0213 05:12:07.845672 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.846105 kubelet[1549]: E0213 05:12:07.846043 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.846105 kubelet[1549]: W0213 05:12:07.846077 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.846371 kubelet[1549]: E0213 05:12:07.846170 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.846607 kubelet[1549]: E0213 05:12:07.846601 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.846642 kubelet[1549]: W0213 05:12:07.846607 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.846642 kubelet[1549]: E0213 05:12:07.846634 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.846792 kubelet[1549]: E0213 05:12:07.846762 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.846792 kubelet[1549]: W0213 05:12:07.846768 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.846792 kubelet[1549]: E0213 05:12:07.846777 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.846900 kubelet[1549]: E0213 05:12:07.846895 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.846933 kubelet[1549]: W0213 05:12:07.846920 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.846933 kubelet[1549]: E0213 05:12:07.846928 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.847094 kubelet[1549]: E0213 05:12:07.847089 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847129 kubelet[1549]: W0213 05:12:07.847094 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847129 kubelet[1549]: E0213 05:12:07.847117 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.847229 kubelet[1549]: E0213 05:12:07.847225 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847250 kubelet[1549]: W0213 05:12:07.847229 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847250 kubelet[1549]: E0213 05:12:07.847236 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.847318 kubelet[1549]: E0213 05:12:07.847314 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847339 kubelet[1549]: W0213 05:12:07.847318 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847339 kubelet[1549]: E0213 05:12:07.847325 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.847478 kubelet[1549]: E0213 05:12:07.847473 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847513 kubelet[1549]: W0213 05:12:07.847479 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847513 kubelet[1549]: E0213 05:12:07.847488 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.847641 kubelet[1549]: E0213 05:12:07.847612 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847641 kubelet[1549]: W0213 05:12:07.847617 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847641 kubelet[1549]: E0213 05:12:07.847623 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.847777 kubelet[1549]: E0213 05:12:07.847727 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847777 kubelet[1549]: W0213 05:12:07.847747 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847777 kubelet[1549]: E0213 05:12:07.847753 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.847777 kubelet[1549]: I0213 05:12:07.847766 1549 reconciler.go:41] "Reconciler: start to sync state" Feb 13 05:12:07.847936 kubelet[1549]: E0213 05:12:07.847930 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.847984 kubelet[1549]: W0213 05:12:07.847937 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.847984 kubelet[1549]: E0213 05:12:07.847949 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.848075 kubelet[1549]: E0213 05:12:07.848069 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848093 kubelet[1549]: W0213 05:12:07.848075 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848093 kubelet[1549]: E0213 05:12:07.848086 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.848175 kubelet[1549]: E0213 05:12:07.848170 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848197 kubelet[1549]: W0213 05:12:07.848176 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848197 kubelet[1549]: E0213 05:12:07.848187 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.848323 kubelet[1549]: E0213 05:12:07.848317 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848348 kubelet[1549]: W0213 05:12:07.848324 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848348 kubelet[1549]: E0213 05:12:07.848338 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.848437 kubelet[1549]: E0213 05:12:07.848433 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848459 kubelet[1549]: W0213 05:12:07.848437 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848459 kubelet[1549]: E0213 05:12:07.848444 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.848515 kubelet[1549]: E0213 05:12:07.848510 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848532 kubelet[1549]: W0213 05:12:07.848514 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848532 kubelet[1549]: E0213 05:12:07.848521 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.848616 kubelet[1549]: E0213 05:12:07.848612 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848638 kubelet[1549]: W0213 05:12:07.848616 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848638 kubelet[1549]: E0213 05:12:07.848621 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.848726 kubelet[1549]: E0213 05:12:07.848721 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.848743 kubelet[1549]: W0213 05:12:07.848726 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.848743 kubelet[1549]: E0213 05:12:07.848731 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.849107 kubelet[1549]: E0213 05:12:07.849102 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.849107 kubelet[1549]: W0213 05:12:07.849107 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.849155 kubelet[1549]: E0213 05:12:07.849113 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.852782 kubelet[1549]: E0213 05:12:07.852746 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.852782 kubelet[1549]: W0213 05:12:07.852752 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.852782 kubelet[1549]: E0213 05:12:07.852760 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.949563 kubelet[1549]: E0213 05:12:07.949470 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.949563 kubelet[1549]: W0213 05:12:07.949512 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.949563 kubelet[1549]: E0213 05:12:07.949568 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.950218 kubelet[1549]: E0213 05:12:07.950144 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.950218 kubelet[1549]: W0213 05:12:07.950178 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.950218 kubelet[1549]: E0213 05:12:07.950227 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.950899 kubelet[1549]: E0213 05:12:07.950825 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.950899 kubelet[1549]: W0213 05:12:07.950860 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.950899 kubelet[1549]: E0213 05:12:07.950909 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.951476 kubelet[1549]: E0213 05:12:07.951398 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.951476 kubelet[1549]: W0213 05:12:07.951425 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.951476 kubelet[1549]: E0213 05:12:07.951479 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.952080 kubelet[1549]: E0213 05:12:07.951999 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.952080 kubelet[1549]: W0213 05:12:07.952032 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.952375 kubelet[1549]: E0213 05:12:07.952149 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.952679 kubelet[1549]: E0213 05:12:07.952600 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.952679 kubelet[1549]: W0213 05:12:07.952634 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.953018 kubelet[1549]: E0213 05:12:07.952713 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.953206 kubelet[1549]: E0213 05:12:07.953147 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.953206 kubelet[1549]: W0213 05:12:07.953172 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.953433 kubelet[1549]: E0213 05:12:07.953264 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.953790 kubelet[1549]: E0213 05:12:07.953717 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.953790 kubelet[1549]: W0213 05:12:07.953741 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.953790 kubelet[1549]: E0213 05:12:07.953792 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.954295 kubelet[1549]: E0213 05:12:07.954267 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.954295 kubelet[1549]: W0213 05:12:07.954293 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.954515 kubelet[1549]: E0213 05:12:07.954359 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:07.954945 kubelet[1549]: E0213 05:12:07.954871 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.954945 kubelet[1549]: W0213 05:12:07.954896 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.954945 kubelet[1549]: E0213 05:12:07.954950 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:07.955476 kubelet[1549]: E0213 05:12:07.955397 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:07.955476 kubelet[1549]: W0213 05:12:07.955423 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:07.955760 kubelet[1549]: E0213 05:12:07.955547 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 05:12:07.955966 kubelet[1549]: E0213 05:12:07.955908 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.955966 kubelet[1549]: W0213 05:12:07.955941 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.956212 kubelet[1549]: E0213 05:12:07.956066 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.956488 kubelet[1549]: E0213 05:12:07.956406 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.956488 kubelet[1549]: W0213 05:12:07.956432 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.956772 kubelet[1549]: E0213 05:12:07.956553 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.956987 kubelet[1549]: E0213 05:12:07.956954 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.957100 kubelet[1549]: W0213 05:12:07.956989 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.957217 kubelet[1549]: E0213 05:12:07.957094 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.957481 kubelet[1549]: E0213 05:12:07.957452 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.957591 kubelet[1549]: W0213 05:12:07.957479 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.957591 kubelet[1549]: E0213 05:12:07.957573 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.958040 kubelet[1549]: E0213 05:12:07.958009 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.958140 kubelet[1549]: W0213 05:12:07.958044 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.958258 kubelet[1549]: E0213 05:12:07.958150 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.958626 kubelet[1549]: E0213 05:12:07.958593 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.958737 kubelet[1549]: W0213 05:12:07.958628 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.958836 kubelet[1549]: E0213 05:12:07.958737 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.959167 kubelet[1549]: E0213 05:12:07.959114 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.959167 kubelet[1549]: W0213 05:12:07.959147 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.959411 kubelet[1549]: E0213 05:12:07.959242 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.959743 kubelet[1549]: E0213 05:12:07.959661 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.959743 kubelet[1549]: W0213 05:12:07.959695 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.960027 kubelet[1549]: E0213 05:12:07.959795 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.960276 kubelet[1549]: E0213 05:12:07.960248 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.960403 kubelet[1549]: W0213 05:12:07.960275 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.960403 kubelet[1549]: E0213 05:12:07.960385 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.960965 kubelet[1549]: E0213 05:12:07.960883 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.960965 kubelet[1549]: W0213 05:12:07.960918 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.961250 kubelet[1549]: E0213 05:12:07.961051 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.961477 kubelet[1549]: E0213 05:12:07.961398 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.961477 kubelet[1549]: W0213 05:12:07.961423 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.961789 kubelet[1549]: E0213 05:12:07.961544 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.961906 kubelet[1549]: E0213 05:12:07.961890 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.962003 kubelet[1549]: W0213 05:12:07.961914 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.962101 kubelet[1549]: E0213 05:12:07.962003 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.962500 kubelet[1549]: E0213 05:12:07.962425 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.962500 kubelet[1549]: W0213 05:12:07.962491 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.962813 kubelet[1549]: E0213 05:12:07.962578 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.963076 kubelet[1549]: E0213 05:12:07.963015 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.963076 kubelet[1549]: W0213 05:12:07.963049 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.963286 kubelet[1549]: E0213 05:12:07.963142 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.963672 kubelet[1549]: E0213 05:12:07.963600 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.963672 kubelet[1549]: W0213 05:12:07.963632 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.963993 kubelet[1549]: E0213 05:12:07.963722 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.964220 kubelet[1549]: E0213 05:12:07.964190 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.964220 kubelet[1549]: W0213 05:12:07.964217 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.964432 kubelet[1549]: E0213 05:12:07.964304 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.964845 kubelet[1549]: E0213 05:12:07.964763 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.964845 kubelet[1549]: W0213 05:12:07.964804 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.965235 kubelet[1549]: E0213 05:12:07.964885 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.965462 kubelet[1549]: E0213 05:12:07.965378 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.965462 kubelet[1549]: W0213 05:12:07.965420 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.965774 kubelet[1549]: E0213 05:12:07.965560 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.966161 kubelet[1549]: E0213 05:12:07.966084 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.966161 kubelet[1549]: W0213 05:12:07.966128 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.966446 kubelet[1549]: E0213 05:12:07.966252 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.966827 kubelet[1549]: E0213 05:12:07.966751 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.966827 kubelet[1549]: W0213 05:12:07.966780 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.967110 kubelet[1549]: E0213 05:12:07.966874 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.967393 kubelet[1549]: E0213 05:12:07.967322 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.967393 kubelet[1549]: W0213 05:12:07.967377 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.967640 kubelet[1549]: E0213 05:12:07.967505 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.967919 kubelet[1549]: E0213 05:12:07.967865 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.967919 kubelet[1549]: W0213 05:12:07.967891 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.968159 kubelet[1549]: E0213 05:12:07.967981 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.968423 kubelet[1549]: E0213 05:12:07.968330 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.968423 kubelet[1549]: W0213 05:12:07.968380 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.968738 kubelet[1549]: E0213 05:12:07.968470 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.968941 kubelet[1549]: E0213 05:12:07.968891 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.968941 kubelet[1549]: W0213 05:12:07.968923 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.969150 kubelet[1549]: E0213 05:12:07.968981 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.969440 kubelet[1549]: E0213 05:12:07.969364 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.969440 kubelet[1549]: W0213 05:12:07.969392 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.969755 kubelet[1549]: E0213 05:12:07.969459 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.969869 kubelet[1549]: E0213 05:12:07.969822 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.969869 kubelet[1549]: W0213 05:12:07.969845 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.970053 kubelet[1549]: E0213 05:12:07.969894 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.970282 kubelet[1549]: E0213 05:12:07.970254 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.970282 kubelet[1549]: W0213 05:12:07.970278 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.970521 kubelet[1549]: E0213 05:12:07.970327 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.970817 kubelet[1549]: E0213 05:12:07.970792 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.970920 kubelet[1549]: W0213 05:12:07.970815 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971018 kubelet[1549]: E0213 05:12:07.970928 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.971245 kubelet[1549]: E0213 05:12:07.971217 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.971317 kubelet[1549]: W0213 05:12:07.971245 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971361 kubelet[1549]: E0213 05:12:07.971323 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.971471 kubelet[1549]: E0213 05:12:07.971435 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.971471 kubelet[1549]: W0213 05:12:07.971440 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971471 kubelet[1549]: E0213 05:12:07.971455 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.971644 kubelet[1549]: E0213 05:12:07.971589 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.971644 kubelet[1549]: W0213 05:12:07.971609 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971644 kubelet[1549]: E0213 05:12:07.971624 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.971786 kubelet[1549]: E0213 05:12:07.971756 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.971786 kubelet[1549]: W0213 05:12:07.971761 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971786 kubelet[1549]: E0213 05:12:07.971768 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.971893 kubelet[1549]: E0213 05:12:07.971888 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.971949 kubelet[1549]: W0213 05:12:07.971892 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.971949 kubelet[1549]: E0213 05:12:07.971922 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.972101 kubelet[1549]: E0213 05:12:07.972095 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.972101 kubelet[1549]: W0213 05:12:07.972101 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.972160 kubelet[1549]: E0213 05:12:07.972110 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.972296 kubelet[1549]: E0213 05:12:07.972273 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.972314 kubelet[1549]: W0213 05:12:07.972297 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.972314 kubelet[1549]: E0213 05:12:07.972309 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.972407 kubelet[1549]: E0213 05:12:07.972402 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.972425 kubelet[1549]: W0213 05:12:07.972407 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.972425 kubelet[1549]: E0213 05:12:07.972413 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:07.977104 kubelet[1549]: E0213 05:12:07.977051 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 05:12:07.977104 kubelet[1549]: W0213 05:12:07.977057 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 05:12:07.977104 kubelet[1549]: E0213 05:12:07.977064 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 05:12:08.049385 env[1164]: time="2024-02-13T05:12:08.049245693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zsrwd,Uid:2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6,Namespace:calico-system,Attempt:0,}"
Feb 13 05:12:08.051616 env[1164]: time="2024-02-13T05:12:08.051499288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgl8b,Uid:afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7,Namespace:kube-system,Attempt:0,}"
Feb 13 05:12:08.725455 kubelet[1549]: E0213 05:12:08.725327 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 05:12:08.842046 env[1164]: time="2024-02-13T05:12:08.841993606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.843297 env[1164]: time="2024-02-13T05:12:08.843255853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.843942 env[1164]: time="2024-02-13T05:12:08.843906702Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.844652 env[1164]: time="2024-02-13T05:12:08.844612098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.845003 env[1164]: time="2024-02-13T05:12:08.844964622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.846182 env[1164]: time="2024-02-13T05:12:08.846146539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.846427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537676641.mount: Deactivated successfully.
Feb 13 05:12:08.847685 env[1164]: time="2024-02-13T05:12:08.847636272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.848109 env[1164]: time="2024-02-13T05:12:08.848063348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 05:12:08.854293 env[1164]: time="2024-02-13T05:12:08.854260420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:12:08.854293 env[1164]: time="2024-02-13T05:12:08.854280793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:12:08.854380 env[1164]: time="2024-02-13T05:12:08.854287731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:12:08.854380 env[1164]: time="2024-02-13T05:12:08.854363634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6e03892ca741e2cb6ce198f4bef83ab14ac23f3d8ca4cf47d6c5ad50090c517 pid=1693 runtime=io.containerd.runc.v2
Feb 13 05:12:08.855264 env[1164]: time="2024-02-13T05:12:08.855233121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 05:12:08.855264 env[1164]: time="2024-02-13T05:12:08.855253611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 05:12:08.855264 env[1164]: time="2024-02-13T05:12:08.855260744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 05:12:08.855342 env[1164]: time="2024-02-13T05:12:08.855317354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784 pid=1708 runtime=io.containerd.runc.v2
Feb 13 05:12:08.860097 systemd[1]: Started cri-containerd-c6e03892ca741e2cb6ce198f4bef83ab14ac23f3d8ca4cf47d6c5ad50090c517.scope.
Feb 13 05:12:08.861618 systemd[1]: Started cri-containerd-af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784.scope.
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.864000 audit: BPF prog-id=59 op=LOAD
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1693 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:12:08.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653033383932636137343165326362366365313938663462656638
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1693 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:12:08.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653033383932636137343165326362366365313938663462656638
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit: BPF prog-id=60 op=LOAD
Feb 13 05:12:08.865000 audit[1714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000265d90 items=0 ppid=1693 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 05:12:08.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653033383932636137343165326362366365313938663462656638
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 
13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit: BPF prog-id=61 op=LOAD Feb 13 05:12:08.865000 audit[1714]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000265dd8 items=0 ppid=1693 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653033383932636137343165326362366365313938663462656638 Feb 13 05:12:08.865000 audit: BPF prog-id=61 op=UNLOAD Feb 13 05:12:08.865000 audit: BPF prog-id=60 op=UNLOAD Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { perfmon } for pid=1714 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit[1714]: AVC avc: denied { bpf } for pid=1714 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.865000 audit: BPF prog-id=62 op=LOAD Feb 13 05:12:08.865000 audit[1714]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0003c01e8 items=0 ppid=1693 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 13 05:12:08.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653033383932636137343165326362366365313938663462656638 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit: BPF prog-id=63 op=LOAD Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1708 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383139313062633536326531386564316662623034633963373161 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1708 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.868000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383139313062633536326531386564316662623034633963373161 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC 
avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit: BPF prog-id=64 op=LOAD Feb 13 05:12:08.868000 audit[1719]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00020f690 items=0 ppid=1708 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383139313062633536326531386564316662623034633963373161 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } 
for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit: BPF prog-id=65 op=LOAD Feb 13 05:12:08.868000 audit[1719]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00020f6d8 items=0 ppid=1708 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383139313062633536326531386564316662623034633963373161 Feb 13 05:12:08.868000 audit: BPF prog-id=65 op=UNLOAD Feb 13 05:12:08.868000 audit: BPF prog-id=64 op=UNLOAD Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { perfmon } for pid=1719 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit[1719]: AVC avc: denied { bpf } for pid=1719 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:08.868000 audit: BPF prog-id=66 op=LOAD Feb 13 
05:12:08.868000 audit[1719]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00020fae8 items=0 ppid=1708 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:08.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166383139313062633536326531386564316662623034633963373161 Feb 13 05:12:08.871297 env[1164]: time="2024-02-13T05:12:08.871273421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgl8b,Uid:afbf8dc2-46dd-4b6b-bbe0-73589e0ed4a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6e03892ca741e2cb6ce198f4bef83ab14ac23f3d8ca4cf47d6c5ad50090c517\"" Feb 13 05:12:08.872300 env[1164]: time="2024-02-13T05:12:08.872287906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 13 05:12:08.874261 env[1164]: time="2024-02-13T05:12:08.874245135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zsrwd,Uid:2593c255-9bd9-4c8a-a0c8-d2724bc0d7f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784\"" Feb 13 05:12:08.883758 kubelet[1549]: E0213 05:12:08.883720 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:09.725632 kubelet[1549]: E0213 05:12:09.725585 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
05:12:10.105344 env[1164]: time="2024-02-13T05:12:10.105249779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:10.105974 env[1164]: time="2024-02-13T05:12:10.105927215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:10.106583 env[1164]: time="2024-02-13T05:12:10.106539961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:10.126296 env[1164]: time="2024-02-13T05:12:10.126228391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:10.127021 env[1164]: time="2024-02-13T05:12:10.126953102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 13 05:12:10.127736 env[1164]: time="2024-02-13T05:12:10.127666290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 13 05:12:10.129292 env[1164]: time="2024-02-13T05:12:10.129222719Z" level=info msg="CreateContainer within sandbox \"c6e03892ca741e2cb6ce198f4bef83ab14ac23f3d8ca4cf47d6c5ad50090c517\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 05:12:10.142766 env[1164]: time="2024-02-13T05:12:10.142687248Z" level=info msg="CreateContainer within sandbox \"c6e03892ca741e2cb6ce198f4bef83ab14ac23f3d8ca4cf47d6c5ad50090c517\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"f9a002931eca491ca1a86554ccd10e42b6d2047af968bc10ed57ca5ef15c57c1\"" Feb 13 05:12:10.143442 env[1164]: time="2024-02-13T05:12:10.143394499Z" level=info msg="StartContainer for \"f9a002931eca491ca1a86554ccd10e42b6d2047af968bc10ed57ca5ef15c57c1\"" Feb 13 05:12:10.164984 systemd[1]: Started cri-containerd-f9a002931eca491ca1a86554ccd10e42b6d2047af968bc10ed57ca5ef15c57c1.scope. Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.199847 kernel: kauditd_printk_skb: 332 callbacks suppressed Feb 13 05:12:10.199904 kernel: audit: type=1400 audit(1707801130.171:550): avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f40fdbe8518 items=0 ppid=1693 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.360109 kernel: audit: type=1300 audit(1707801130.171:550): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f40fdbe8518 items=0 ppid=1693 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.360144 kernel: audit: type=1327 audit(1707801130.171:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613030323933316563613439316361316138363535346363643130 Feb 
13 05:12:10.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613030323933316563613439316361316138363535346363643130 Feb 13 05:12:10.453377 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.581851 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.581914 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.645827 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.651295 env[1164]: time="2024-02-13T05:12:10.651268186Z" level=info msg="StartContainer for \"f9a002931eca491ca1a86554ccd10e42b6d2047af968bc10ed57ca5ef15c57c1\" returns successfully" Feb 13 05:12:10.709602 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.725698 kubelet[1549]: E0213 05:12:10.725653 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:10.773468 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.837313 kernel: audit: type=1400 audit(1707801130.171:551): avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.883896 kubelet[1549]: E0213 05:12:10.883856 1549 pod_workers.go:1294] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.171000 audit: BPF prog-id=67 op=LOAD Feb 13 05:12:10.171000 audit[1771]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c000275ba8 items=0 ppid=1693 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613030323933316563613439316361316138363535346363643130 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.261000 audit: BPF prog-id=68 op=LOAD Feb 13 05:12:10.261000 audit[1771]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c000275bf8 items=0 ppid=1693 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.261000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613030323933316563613439316361316138363535346363643130 Feb 13 05:12:10.452000 audit: BPF prog-id=68 op=UNLOAD Feb 13 05:12:10.452000 audit: BPF prog-id=67 op=UNLOAD Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { perfmon } for pid=1771 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit[1771]: AVC avc: denied { bpf } for pid=1771 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:10.452000 audit: BPF prog-id=69 op=LOAD Feb 13 05:12:10.452000 audit[1771]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c000275c88 items=0 ppid=1693 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639613030323933316563613439316361316138363535346363643130 Feb 13 05:12:10.682000 audit[1831]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.682000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecf465830 a2=0 a3=7ffecf46581c items=0 ppid=1781 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 13 05:12:10.683000 audit[1832]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=1832 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:10.683000 audit[1832]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffefff724e0 a2=0 a3=7ffefff724cc items=0 ppid=1781 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 13 05:12:10.683000 audit[1833]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.683000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb4b17b0 a2=0 a3=7ffcbb4b179c items=0 ppid=1781 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 13 05:12:10.683000 audit[1834]: NETFILTER_CFG table=nat:17 family=10 entries=1 op=nft_register_chain pid=1834 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:10.683000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeb7992f0 a2=0 a3=7ffeeb7992dc items=0 ppid=1781 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 13 05:12:10.683000 audit[1835]: NETFILTER_CFG table=filter:18 family=2 entries=1 
op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.683000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe660ef740 a2=0 a3=7ffe660ef72c items=0 ppid=1781 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 13 05:12:10.684000 audit[1836]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:10.684000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd42825570 a2=0 a3=7ffd4282555c items=0 ppid=1781 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 13 05:12:10.785000 audit[1837]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.785000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd66895390 a2=0 a3=7ffd6689537c items=0 ppid=1781 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 13 05:12:10.787000 
audit[1839]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.787000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce969fc80 a2=0 a3=7ffce969fc6c items=0 ppid=1781 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 13 05:12:10.901000 audit[1842]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.901000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc35eb5db0 a2=0 a3=7ffc35eb5d9c items=0 ppid=1781 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.901000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 13 05:12:10.902000 audit[1843]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.902000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc462a7e50 a2=0 a3=7ffc462a7e3c items=0 ppid=1781 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 13 05:12:10.903000 audit[1845]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.903000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcce8561d0 a2=0 a3=7ffcce8561bc items=0 ppid=1781 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 13 05:12:10.903000 audit[1846]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.903000 audit[1846]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe87f37470 a2=0 a3=7ffe87f3745c items=0 ppid=1781 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 13 05:12:10.905308 kubelet[1549]: I0213 05:12:10.905268 1549 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qgl8b" podStartSLOduration=2.649885856 
podCreationTimestamp="2024-02-13 05:12:07 +0000 UTC" firstStartedPulling="2024-02-13 05:12:08.872054576 +0000 UTC m=+3.424018610" lastFinishedPulling="2024-02-13 05:12:10.12741126 +0000 UTC m=+4.679375325" observedRunningTime="2024-02-13 05:12:10.905020431 +0000 UTC m=+5.456984468" watchObservedRunningTime="2024-02-13 05:12:10.905242571 +0000 UTC m=+5.457206608" Feb 13 05:12:10.905000 audit[1848]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.905000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb3caee30 a2=0 a3=7ffeb3caee1c items=0 ppid=1781 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 13 05:12:10.906000 audit[1851]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.906000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0c2c89a0 a2=0 a3=7ffd0c2c898c items=0 ppid=1781 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.906000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 13 05:12:10.907000 audit[1852]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.907000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1f986870 a2=0 a3=7ffe1f98685c items=0 ppid=1781 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 13 05:12:10.908000 audit[1854]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.908000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdab9df50 a2=0 a3=7fffdab9df3c items=0 ppid=1781 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 13 05:12:10.909000 audit[1855]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.909000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc79fbfac0 a2=0 a3=7ffc79fbfaac items=0 ppid=1781 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 13 05:12:10.910000 audit[1857]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.910000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff26a4e7e0 a2=0 a3=7fff26a4e7cc items=0 ppid=1781 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 05:12:10.912000 audit[1860]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.912000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe6248bc0 a2=0 a3=7fffe6248bac items=0 ppid=1781 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.912000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 05:12:10.914000 audit[1863]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.914000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe34056e30 a2=0 a3=7ffe34056e1c items=0 ppid=1781 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 13 05:12:10.914000 audit[1864]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.914000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe77d2e6c0 a2=0 a3=7ffe77d2e6ac items=0 ppid=1781 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 13 05:12:10.915000 audit[1866]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.915000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=600 a0=3 a1=7fff78140f10 a2=0 a3=7fff78140efc items=0 ppid=1781 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 05:12:10.939000 audit[1871]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.939000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffdeba60680 a2=0 a3=7ffdeba6066c items=0 ppid=1781 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 05:12:10.942000 audit[1876]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.942000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6c1f9220 a2=0 a3=7ffc6c1f920c items=0 ppid=1781 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 13 
05:12:10.943000 audit[1878]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 13 05:12:10.943000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc59132ae0 a2=0 a3=7ffc59132acc items=0 ppid=1781 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 13 05:12:10.947000 audit[1880]: NETFILTER_CFG table=filter:39 family=2 entries=5 op=nft_register_rule pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:12:10.947000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=2844 a0=3 a1=7fff519d87c0 a2=0 a3=7fff519d87ac items=0 ppid=1781 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:10.951215 kubelet[1549]: E0213 05:12:10.951178 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.951215 kubelet[1549]: W0213 05:12:10.951189 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.951215 kubelet[1549]: E0213 05:12:10.951203 1549 
plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.951327 kubelet[1549]: E0213 05:12:10.951321 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.951327 kubelet[1549]: W0213 05:12:10.951326 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.951379 kubelet[1549]: E0213 05:12:10.951338 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.951549 kubelet[1549]: E0213 05:12:10.951507 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.951549 kubelet[1549]: W0213 05:12:10.951514 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.951549 kubelet[1549]: E0213 05:12:10.951522 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.951724 kubelet[1549]: E0213 05:12:10.951684 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.951724 kubelet[1549]: W0213 05:12:10.951691 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.951724 kubelet[1549]: E0213 05:12:10.951699 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.951865 kubelet[1549]: E0213 05:12:10.951819 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.951865 kubelet[1549]: W0213 05:12:10.951826 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.951865 kubelet[1549]: E0213 05:12:10.951834 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952000 kubelet[1549]: E0213 05:12:10.951961 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952000 kubelet[1549]: W0213 05:12:10.951969 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952000 kubelet[1549]: E0213 05:12:10.951977 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.952076 kubelet[1549]: E0213 05:12:10.952071 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952102 kubelet[1549]: W0213 05:12:10.952076 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952102 kubelet[1549]: E0213 05:12:10.952083 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952160 kubelet[1549]: E0213 05:12:10.952155 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952182 kubelet[1549]: W0213 05:12:10.952160 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952182 kubelet[1549]: E0213 05:12:10.952166 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.952239 kubelet[1549]: E0213 05:12:10.952233 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952261 kubelet[1549]: W0213 05:12:10.952239 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952261 kubelet[1549]: E0213 05:12:10.952245 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952346 kubelet[1549]: E0213 05:12:10.952340 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952346 kubelet[1549]: W0213 05:12:10.952345 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952393 kubelet[1549]: E0213 05:12:10.952352 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.952427 kubelet[1549]: E0213 05:12:10.952422 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952450 kubelet[1549]: W0213 05:12:10.952427 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952450 kubelet[1549]: E0213 05:12:10.952434 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952541 kubelet[1549]: E0213 05:12:10.952537 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952564 kubelet[1549]: W0213 05:12:10.952541 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952564 kubelet[1549]: E0213 05:12:10.952547 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.952657 kubelet[1549]: E0213 05:12:10.952652 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952681 kubelet[1549]: W0213 05:12:10.952657 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952681 kubelet[1549]: E0213 05:12:10.952668 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952742 kubelet[1549]: E0213 05:12:10.952737 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952764 kubelet[1549]: W0213 05:12:10.952742 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952764 kubelet[1549]: E0213 05:12:10.952748 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.952818 kubelet[1549]: E0213 05:12:10.952814 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952841 kubelet[1549]: W0213 05:12:10.952818 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952841 kubelet[1549]: E0213 05:12:10.952824 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.952897 kubelet[1549]: E0213 05:12:10.952892 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.952919 kubelet[1549]: W0213 05:12:10.952897 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.952919 kubelet[1549]: E0213 05:12:10.952903 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.976000 audit[1880]: NETFILTER_CFG table=nat:40 family=2 entries=65 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:12:10.976000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7fff519d87c0 a2=0 a3=7fff519d87ac items=0 ppid=1781 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:10.989268 kubelet[1549]: E0213 05:12:10.989249 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.989268 kubelet[1549]: W0213 05:12:10.989264 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.989406 kubelet[1549]: E0213 05:12:10.989281 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.989517 kubelet[1549]: E0213 05:12:10.989473 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.989517 kubelet[1549]: W0213 05:12:10.989483 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.989517 kubelet[1549]: E0213 05:12:10.989500 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.989758 kubelet[1549]: E0213 05:12:10.989716 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.989758 kubelet[1549]: W0213 05:12:10.989725 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.989758 kubelet[1549]: E0213 05:12:10.989741 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.989939 kubelet[1549]: E0213 05:12:10.989923 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.989939 kubelet[1549]: W0213 05:12:10.989932 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.990025 kubelet[1549]: E0213 05:12:10.989949 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.990125 kubelet[1549]: E0213 05:12:10.990095 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.990125 kubelet[1549]: W0213 05:12:10.990103 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.990125 kubelet[1549]: E0213 05:12:10.990117 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.990295 kubelet[1549]: E0213 05:12:10.990285 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.990343 kubelet[1549]: W0213 05:12:10.990295 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.990343 kubelet[1549]: E0213 05:12:10.990309 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.990582 kubelet[1549]: E0213 05:12:10.990570 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.990629 kubelet[1549]: W0213 05:12:10.990584 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.990629 kubelet[1549]: E0213 05:12:10.990606 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.990844 kubelet[1549]: E0213 05:12:10.990801 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.990844 kubelet[1549]: W0213 05:12:10.990814 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.990844 kubelet[1549]: E0213 05:12:10.990833 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.991006 kubelet[1549]: E0213 05:12:10.990996 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.991006 kubelet[1549]: W0213 05:12:10.991006 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.991093 kubelet[1549]: E0213 05:12:10.991020 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.991195 kubelet[1549]: E0213 05:12:10.991185 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.991236 kubelet[1549]: W0213 05:12:10.991195 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.991236 kubelet[1549]: E0213 05:12:10.991212 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.991451 kubelet[1549]: E0213 05:12:10.991440 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.991505 kubelet[1549]: W0213 05:12:10.991451 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.991505 kubelet[1549]: E0213 05:12:10.991472 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:10.991697 kubelet[1549]: E0213 05:12:10.991656 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:10.991697 kubelet[1549]: W0213 05:12:10.991665 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:10.991697 kubelet[1549]: E0213 05:12:10.991678 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:10.995000 audit[1917]: NETFILTER_CFG table=filter:41 family=2 entries=8 op=nft_register_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:12:10.995000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=2844 a0=3 a1=7fffce3feaf0 a2=0 a3=7fffce3feadc items=0 ppid=1781 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:10.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:11.011000 audit[1917]: NETFILTER_CFG table=nat:42 family=2 entries=22 op=nft_register_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:12:11.011000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=6212 a0=3 a1=7fffce3feaf0 a2=0 a3=7fffce3feadc items=0 ppid=1781 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.011000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:11.012000 audit[1918]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.012000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdb8d51220 a2=0 a3=7ffdb8d5120c items=0 ppid=1781 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 13 05:12:11.016000 audit[1920]: NETFILTER_CFG table=filter:44 family=10 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.016000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc3d7e2ac0 a2=0 a3=7ffc3d7e2aac items=0 ppid=1781 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 13 05:12:11.038000 audit[1923]: NETFILTER_CFG table=filter:45 family=10 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.038000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc26d9b180 a2=0 a3=7ffc26d9b16c items=0 ppid=1781 pid=1923 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 13 05:12:11.041000 audit[1924]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.041000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb1e25870 a2=0 a3=7ffeb1e2585c items=0 ppid=1781 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 13 05:12:11.047000 audit[1926]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.047000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2063cb50 a2=0 a3=7ffe2063cb3c items=0 ppid=1781 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 13 05:12:11.050000 
audit[1927]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.050000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2b178b70 a2=0 a3=7ffc2b178b5c items=0 ppid=1781 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 13 05:12:11.056000 audit[1929]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.056000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1037c0c0 a2=0 a3=7ffd1037c0ac items=0 ppid=1781 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 13 05:12:11.065000 audit[1932]: NETFILTER_CFG table=filter:50 family=10 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.065000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeb466e620 a2=0 a3=7ffeb466e60c items=0 ppid=1781 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 13 05:12:11.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 13 05:12:11.068000 audit[1933]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.068000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffaa620150 a2=0 a3=7fffaa62013c items=0 ppid=1781 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 13 05:12:11.074000 audit[1935]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.074000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde1942900 a2=0 a3=7ffde19428ec items=0 ppid=1781 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 13 05:12:11.077000 audit[1936]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.077000 audit[1936]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca2901a20 a2=0 a3=7ffca2901a0c items=0 ppid=1781 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 13 05:12:11.083000 audit[1938]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.083000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb70d6db0 a2=0 a3=7ffeb70d6d9c items=0 ppid=1781 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 13 05:12:11.092000 audit[1941]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.092000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbef58780 a2=0 a3=7ffdbef5876c items=0 ppid=1781 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.092000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 13 05:12:11.101000 audit[1944]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.101000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7cf23800 a2=0 a3=7ffc7cf237ec items=0 ppid=1781 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.101000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 13 05:12:11.104000 audit[1945]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.104000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe946ae910 a2=0 a3=7ffe946ae8fc items=0 ppid=1781 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 13 05:12:11.109000 audit[1947]: NETFILTER_CFG table=nat:58 family=10 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.109000 audit[1947]: SYSCALL arch=c000003e syscall=46 
success=yes exit=600 a0=3 a1=7ffdf922d340 a2=0 a3=7ffdf922d32c items=0 ppid=1781 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 05:12:11.117000 audit[1950]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.117000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe61b708d0 a2=0 a3=7ffe61b708bc items=0 ppid=1781 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 13 05:12:11.120000 audit[1951]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.120000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e809c40 a2=0 a3=7fff2e809c2c items=0 ppid=1781 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.120000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 13 05:12:11.126000 audit[1953]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=1953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.126000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0d9b6560 a2=0 a3=7ffe0d9b654c items=0 ppid=1781 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 05:12:11.134000 audit[1956]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.134000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff62d2fb70 a2=0 a3=7fff62d2fb5c items=0 ppid=1781 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 13 05:12:11.137000 audit[1957]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.137000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff402252a0 a2=0 a3=7fff4022528c items=0 ppid=1781 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 13 05:12:11.143000 audit[1959]: NETFILTER_CFG table=nat:64 family=10 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 13 05:12:11.143000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffb5b0b2f0 a2=0 a3=7fffb5b0b2dc items=0 ppid=1781 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 13 05:12:11.152000 audit[1962]: NETFILTER_CFG table=filter:65 family=10 entries=3 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 13 05:12:11.152000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe7a07d7f0 a2=0 a3=7ffe7a07d7dc items=0 ppid=1781 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.152000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:11.154000 audit[1962]: NETFILTER_CFG table=nat:66 family=10 entries=7 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 13 05:12:11.154000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffe7a07d7f0 a2=0 a3=7ffe7a07d7dc 
items=0 ppid=1781 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:11.154000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:12:11.726610 kubelet[1549]: E0213 05:12:11.726492 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:11.960915 kubelet[1549]: E0213 05:12:11.960820 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.960915 kubelet[1549]: W0213 05:12:11.960859 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.960915 kubelet[1549]: E0213 05:12:11.960909 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.961442 kubelet[1549]: E0213 05:12:11.961414 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.961442 kubelet[1549]: W0213 05:12:11.961439 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.961640 kubelet[1549]: E0213 05:12:11.961480 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.962060 kubelet[1549]: E0213 05:12:11.961979 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.962060 kubelet[1549]: W0213 05:12:11.962013 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.962060 kubelet[1549]: E0213 05:12:11.962054 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.962728 kubelet[1549]: E0213 05:12:11.962647 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.962728 kubelet[1549]: W0213 05:12:11.962682 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.962728 kubelet[1549]: E0213 05:12:11.962723 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.963246 kubelet[1549]: E0213 05:12:11.963213 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.963246 kubelet[1549]: W0213 05:12:11.963240 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.963489 kubelet[1549]: E0213 05:12:11.963276 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.963882 kubelet[1549]: E0213 05:12:11.963801 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.963882 kubelet[1549]: W0213 05:12:11.963835 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.963882 kubelet[1549]: E0213 05:12:11.963876 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.964512 kubelet[1549]: E0213 05:12:11.964428 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.964512 kubelet[1549]: W0213 05:12:11.964453 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.964512 kubelet[1549]: E0213 05:12:11.964486 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.965056 kubelet[1549]: E0213 05:12:11.964981 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.965056 kubelet[1549]: W0213 05:12:11.965015 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.965056 kubelet[1549]: E0213 05:12:11.965056 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.965659 kubelet[1549]: E0213 05:12:11.965568 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.965659 kubelet[1549]: W0213 05:12:11.965602 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.965659 kubelet[1549]: E0213 05:12:11.965643 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.966287 kubelet[1549]: E0213 05:12:11.966258 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.966287 kubelet[1549]: W0213 05:12:11.966285 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.966557 kubelet[1549]: E0213 05:12:11.966323 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.966895 kubelet[1549]: E0213 05:12:11.966821 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.966895 kubelet[1549]: W0213 05:12:11.966854 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.966895 kubelet[1549]: E0213 05:12:11.966896 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.967449 kubelet[1549]: E0213 05:12:11.967370 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.967449 kubelet[1549]: W0213 05:12:11.967396 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.967449 kubelet[1549]: E0213 05:12:11.967428 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.968002 kubelet[1549]: E0213 05:12:11.967946 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.968002 kubelet[1549]: W0213 05:12:11.967979 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.968260 kubelet[1549]: E0213 05:12:11.968023 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.968631 kubelet[1549]: E0213 05:12:11.968558 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.968631 kubelet[1549]: W0213 05:12:11.968592 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.968631 kubelet[1549]: E0213 05:12:11.968632 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.969171 kubelet[1549]: E0213 05:12:11.969142 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.969171 kubelet[1549]: W0213 05:12:11.969169 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.969418 kubelet[1549]: E0213 05:12:11.969205 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.969796 kubelet[1549]: E0213 05:12:11.969711 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.969796 kubelet[1549]: W0213 05:12:11.969744 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.969796 kubelet[1549]: E0213 05:12:11.969786 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.996498 kubelet[1549]: E0213 05:12:11.996295 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.996498 kubelet[1549]: W0213 05:12:11.996357 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.996498 kubelet[1549]: E0213 05:12:11.996410 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.997068 kubelet[1549]: E0213 05:12:11.996978 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.997068 kubelet[1549]: W0213 05:12:11.997012 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.997068 kubelet[1549]: E0213 05:12:11.997062 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.997644 kubelet[1549]: E0213 05:12:11.997564 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.997644 kubelet[1549]: W0213 05:12:11.997597 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.997644 kubelet[1549]: E0213 05:12:11.997645 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.998202 kubelet[1549]: E0213 05:12:11.998148 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.998202 kubelet[1549]: W0213 05:12:11.998172 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.998458 kubelet[1549]: E0213 05:12:11.998211 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.998796 kubelet[1549]: E0213 05:12:11.998714 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.998796 kubelet[1549]: W0213 05:12:11.998749 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.999128 kubelet[1549]: E0213 05:12:11.998884 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:11.999280 kubelet[1549]: E0213 05:12:11.999251 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.999416 kubelet[1549]: W0213 05:12:11.999278 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.999416 kubelet[1549]: E0213 05:12:11.999323 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:11.999906 kubelet[1549]: E0213 05:12:11.999824 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:11.999906 kubelet[1549]: W0213 05:12:11.999860 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:11.999906 kubelet[1549]: E0213 05:12:11.999909 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:12.000354 kubelet[1549]: E0213 05:12:12.000321 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:12.000473 kubelet[1549]: W0213 05:12:12.000362 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:12.000473 kubelet[1549]: E0213 05:12:12.000455 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:12.000889 kubelet[1549]: E0213 05:12:12.000815 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:12.000889 kubelet[1549]: W0213 05:12:12.000849 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:12.000889 kubelet[1549]: E0213 05:12:12.000898 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:12.001655 kubelet[1549]: E0213 05:12:12.001510 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:12.001655 kubelet[1549]: W0213 05:12:12.001544 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:12.001655 kubelet[1549]: E0213 05:12:12.001616 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:12.001931 kubelet[1549]: E0213 05:12:12.001892 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:12.001931 kubelet[1549]: W0213 05:12:12.001898 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:12.001931 kubelet[1549]: E0213 05:12:12.001926 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 05:12:12.002111 kubelet[1549]: E0213 05:12:12.002106 1549 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 05:12:12.002111 kubelet[1549]: W0213 05:12:12.002110 1549 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 05:12:12.002187 kubelet[1549]: E0213 05:12:12.002140 1549 plugins.go:729] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 05:12:12.726969 kubelet[1549]: E0213 05:12:12.726906 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:12.883720 kubelet[1549]: E0213 05:12:12.883659 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:13.466681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629955499.mount: Deactivated successfully. Feb 13 05:12:13.728019 kubelet[1549]: E0213 05:12:13.727797 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:14.727977 kubelet[1549]: E0213 05:12:14.727926 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:14.755741 systemd-timesyncd[1110]: Contacted time server [2604:a880:400:d0::83:2002]:123 (2.flatcar.pool.ntp.org). Feb 13 05:12:14.755892 systemd-timesyncd[1110]: Initial clock synchronization to Tue 2024-02-13 05:12:14.855212 UTC. 
Feb 13 05:12:14.883464 kubelet[1549]: E0213 05:12:14.883413 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:15.729077 kubelet[1549]: E0213 05:12:15.729004 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:16.730170 kubelet[1549]: E0213 05:12:16.730061 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:16.883939 kubelet[1549]: E0213 05:12:16.883884 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:17.731381 kubelet[1549]: E0213 05:12:17.731303 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:18.731654 kubelet[1549]: E0213 05:12:18.731589 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:18.883157 kubelet[1549]: E0213 05:12:18.883106 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:19.732726 kubelet[1549]: E0213 05:12:19.732652 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:20.733285 kubelet[1549]: E0213 05:12:20.733215 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:20.884004 kubelet[1549]: E0213 05:12:20.883941 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:21.578127 env[1164]: time="2024-02-13T05:12:21.578074050Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:21.578729 env[1164]: time="2024-02-13T05:12:21.578689460Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:21.580057 env[1164]: time="2024-02-13T05:12:21.580014686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:21.580879 env[1164]: time="2024-02-13T05:12:21.580841124Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:21.581878 env[1164]: time="2024-02-13T05:12:21.581836523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 13 
05:12:21.582668 env[1164]: time="2024-02-13T05:12:21.582625535Z" level=info msg="CreateContainer within sandbox \"af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 05:12:21.587754 env[1164]: time="2024-02-13T05:12:21.587709809Z" level=info msg="CreateContainer within sandbox \"af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de\"" Feb 13 05:12:21.587956 env[1164]: time="2024-02-13T05:12:21.587938127Z" level=info msg="StartContainer for \"a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de\"" Feb 13 05:12:21.597008 systemd[1]: Started cri-containerd-a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de.scope. Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.632452 kernel: kauditd_printk_skb: 192 callbacks suppressed Feb 13 05:12:21.632536 kernel: audit: type=1400 audit(1707801141.603:609): avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f93f4d78488 items=0 ppid=1708 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:21.697340 kernel: audit: type=1300 audit(1707801141.603:609): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=7f93f4d78488 items=0 ppid=1708 pid=1998 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:21.734199 kubelet[1549]: E0213 05:12:21.734160 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:21.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633663633639303366643234393736613134333164386662383530 Feb 13 05:12:21.888937 kernel: audit: type=1327 audit(1707801141.603:609): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633663633639303366643234393736613134333164386662383530 Feb 13 05:12:21.888972 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.952871 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.016732 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { bpf } 
for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.080605 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.086503 env[1164]: time="2024-02-13T05:12:22.086476923Z" level=info msg="StartContainer for \"a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de\" returns successfully" Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.146722 systemd[1]: cri-containerd-a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de.scope: Deactivated successfully. Feb 13 05:12:22.156922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de-rootfs.mount: Deactivated successfully. 
Feb 13 05:12:22.211023 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.211066 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:22.275492 kernel: audit: type=1400 audit(1707801141.603:610): avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.603000 audit: BPF prog-id=70 op=LOAD Feb 13 05:12:21.603000 audit[1998]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00027dba8 items=0 
ppid=1708 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:21.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633663633639303366643234393736613134333164386662383530 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.696000 audit: BPF prog-id=71 op=LOAD Feb 13 05:12:21.696000 audit[1998]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0001f9a80 items=0 ppid=1708 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:21.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633663633639303366643234393736613134333164386662383530 Feb 13 05:12:21.888000 audit: BPF prog-id=71 op=UNLOAD Feb 13 05:12:21.888000 audit: BPF prog-id=70 op=UNLOAD Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { perfmon } for pid=1998 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit[1998]: AVC avc: denied { bpf } for pid=1998 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:21.888000 audit: BPF prog-id=72 op=LOAD Feb 13 05:12:21.888000 audit[1998]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0001f9b10 items=0 ppid=1708 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:21.888000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135633663633639303366643234393736613134333164386662383530 Feb 13 05:12:22.343000 audit: BPF prog-id=72 op=UNLOAD Feb 13 05:12:22.454328 env[1164]: time="2024-02-13T05:12:22.454107413Z" level=info msg="shim disconnected" id=a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de Feb 13 05:12:22.454328 env[1164]: time="2024-02-13T05:12:22.454212281Z" level=warning msg="cleaning up after shim disconnected" id=a5c6cc6903fd24976a1431d8fb850f4f5e76261cb511e1822439e95fb27697de namespace=k8s.io Feb 13 05:12:22.454328 env[1164]: time="2024-02-13T05:12:22.454241879Z" level=info msg="cleaning up dead shim" Feb 13 05:12:22.469673 env[1164]: time="2024-02-13T05:12:22.469607873Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:12:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2047 runtime=io.containerd.runc.v2\n" Feb 13 05:12:22.735309 kubelet[1549]: E0213 05:12:22.735102 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:22.884086 kubelet[1549]: E0213 05:12:22.883977 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:22.922437 env[1164]: time="2024-02-13T05:12:22.922355076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 13 05:12:23.736329 kubelet[1549]: E0213 05:12:23.736226 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:24.737372 kubelet[1549]: E0213 05:12:24.737254 
1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:24.883316 kubelet[1549]: E0213 05:12:24.883210 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:25.723302 kubelet[1549]: E0213 05:12:25.723237 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:25.737882 kubelet[1549]: E0213 05:12:25.737773 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:26.410132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321965361.mount: Deactivated successfully. Feb 13 05:12:26.737957 kubelet[1549]: E0213 05:12:26.737913 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:26.883221 kubelet[1549]: E0213 05:12:26.883153 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:27.739062 kubelet[1549]: E0213 05:12:27.738961 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:27.821189 update_engine[1156]: I0213 05:12:27.821065 1156 update_attempter.cc:509] Updating boot flags... 
Feb 13 05:12:28.739948 kubelet[1549]: E0213 05:12:28.739819 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:28.884153 kubelet[1549]: E0213 05:12:28.884072 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:29.740533 kubelet[1549]: E0213 05:12:29.740424 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:30.740621 kubelet[1549]: E0213 05:12:30.740580 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:30.883987 kubelet[1549]: E0213 05:12:30.883886 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:31.741868 kubelet[1549]: E0213 05:12:31.741757 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:32.742965 kubelet[1549]: E0213 05:12:32.742896 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:32.883994 kubelet[1549]: E0213 05:12:32.883923 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:33.743444 kubelet[1549]: E0213 05:12:33.743323 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:34.743679 kubelet[1549]: E0213 05:12:34.743575 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:34.884245 kubelet[1549]: E0213 05:12:34.884136 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:35.743854 kubelet[1549]: E0213 05:12:35.743798 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:36.744607 kubelet[1549]: E0213 05:12:36.744500 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:36.884104 kubelet[1549]: E0213 05:12:36.883995 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:37.745490 kubelet[1549]: E0213 05:12:37.745448 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:38.746273 kubelet[1549]: E0213 05:12:38.746229 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:38.883604 kubelet[1549]: E0213 05:12:38.883587 1549 
pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:38.961466 env[1164]: time="2024-02-13T05:12:38.961444489Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:38.962207 env[1164]: time="2024-02-13T05:12:38.962150980Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:38.963280 env[1164]: time="2024-02-13T05:12:38.963249178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:38.964464 env[1164]: time="2024-02-13T05:12:38.964389426Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 05:12:38.964947 env[1164]: time="2024-02-13T05:12:38.964898426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 13 05:12:38.965936 env[1164]: time="2024-02-13T05:12:38.965923883Z" level=info msg="CreateContainer within sandbox \"af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 05:12:38.971238 env[1164]: time="2024-02-13T05:12:38.971220009Z" level=info msg="CreateContainer within sandbox 
\"af81910bc562e18ed1fbb04c9c71a3b8dfb2d717ad2e41c64dc5ac3484ebb784\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409\"" Feb 13 05:12:38.971581 env[1164]: time="2024-02-13T05:12:38.971510238Z" level=info msg="StartContainer for \"11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409\"" Feb 13 05:12:38.980772 systemd[1]: Started cri-containerd-11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409.scope. Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.015859 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 13 05:12:39.015909 kernel: audit: type=1400 audit(1707801158.987:616): avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=7fbcd44e09f8 items=0 ppid=1708 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:39.177018 kernel: audit: type=1300 audit(1707801158.987:616): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=7fbcd44e09f8 items=0 ppid=1708 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:39.177048 kernel: audit: type=1327 audit(1707801158.987:616): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636166623239336137316134353631393362363261373338373566 Feb 13 05:12:38.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636166623239336137316134353631393362363261373338373566 Feb 13 05:12:39.270382 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.270412 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.270426 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } 
for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.525070 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.525107 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.589046 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.594322 env[1164]: time="2024-02-13T05:12:39.594298740Z" level=info msg="StartContainer for \"11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409\" returns successfully" Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.717152 kernel: audit: type=1400 audit(1707801158.987:617): avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 
05:12:38.987000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:38.987000 audit: BPF prog-id=73 op=LOAD Feb 13 05:12:38.987000 audit[2091]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c000259fc8 items=0 ppid=1708 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:38.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636166623239336137316134353631393362363261373338373566 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 
audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.175000 audit: BPF prog-id=74 op=LOAD Feb 13 05:12:39.175000 audit[2091]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c0003c8018 items=0 ppid=1708 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:39.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636166623239336137316134353631393362363261373338373566 Feb 13 05:12:39.397000 audit: BPF 
prog-id=74 op=UNLOAD Feb 13 05:12:39.397000 audit: BPF prog-id=73 op=UNLOAD Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { perfmon } for pid=2091 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 
audit[2091]: AVC avc: denied { bpf } for pid=2091 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 13 05:12:39.397000 audit: BPF prog-id=75 op=LOAD Feb 13 05:12:39.397000 audit[2091]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c0003c80a8 items=0 ppid=1708 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:12:39.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131636166623239336137316134353631393362363261373338373566 Feb 13 05:12:39.746970 kubelet[1549]: E0213 05:12:39.746907 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:40.285736 env[1164]: time="2024-02-13T05:12:40.285584805Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 05:12:40.290713 systemd[1]: cri-containerd-11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409.scope: Deactivated successfully. Feb 13 05:12:40.300000 audit: BPF prog-id=75 op=UNLOAD Feb 13 05:12:40.318027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409-rootfs.mount: Deactivated successfully. 
Feb 13 05:12:40.385053 kubelet[1549]: I0213 05:12:40.385002 1549 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 05:12:40.748196 kubelet[1549]: E0213 05:12:40.748084 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:40.788119 env[1164]: time="2024-02-13T05:12:40.787987179Z" level=info msg="shim disconnected" id=11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409 Feb 13 05:12:40.788119 env[1164]: time="2024-02-13T05:12:40.788085173Z" level=warning msg="cleaning up after shim disconnected" id=11cafb293a71a456193b62a73875fb19d4252b0fcac2356cb5951c6759267409 namespace=k8s.io Feb 13 05:12:40.788119 env[1164]: time="2024-02-13T05:12:40.788112294Z" level=info msg="cleaning up dead shim" Feb 13 05:12:40.803047 env[1164]: time="2024-02-13T05:12:40.802964075Z" level=warning msg="cleanup warnings time=\"2024-02-13T05:12:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2155 runtime=io.containerd.runc.v2\n" Feb 13 05:12:40.896197 systemd[1]: Created slice kubepods-besteffort-podae936456_294f_41c6_9471_c93c49d5b396.slice. 
Feb 13 05:12:40.900829 env[1164]: time="2024-02-13T05:12:40.900707268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wz5l,Uid:ae936456-294f-41c6-9471-c93c49d5b396,Namespace:calico-system,Attempt:0,}" Feb 13 05:12:40.937398 env[1164]: time="2024-02-13T05:12:40.937298339Z" level=error msg="Failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:40.937621 env[1164]: time="2024-02-13T05:12:40.937569760Z" level=error msg="encountered an error cleaning up failed sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:40.937665 env[1164]: time="2024-02-13T05:12:40.937613176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wz5l,Uid:ae936456-294f-41c6-9471-c93c49d5b396,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:40.937852 kubelet[1549]: E0213 05:12:40.937808 1549 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 05:12:40.937852 kubelet[1549]: E0213 05:12:40.937845 1549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:40.937927 kubelet[1549]: E0213 05:12:40.937861 1549 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wz5l" Feb 13 05:12:40.937927 kubelet[1549]: E0213 05:12:40.937896 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2wz5l_calico-system(ae936456-294f-41c6-9471-c93c49d5b396)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2wz5l_calico-system(ae936456-294f-41c6-9471-c93c49d5b396)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:40.938260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143-shm.mount: Deactivated 
successfully. Feb 13 05:12:40.963750 kubelet[1549]: I0213 05:12:40.963704 1549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:12:40.963878 env[1164]: time="2024-02-13T05:12:40.963803112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 13 05:12:40.964118 env[1164]: time="2024-02-13T05:12:40.964073173Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:12:40.980461 env[1164]: time="2024-02-13T05:12:40.980397215Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:40.980590 kubelet[1549]: E0213 05:12:40.980543 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:12:40.980590 kubelet[1549]: E0213 05:12:40.980572 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:12:40.980590 kubelet[1549]: E0213 05:12:40.980592 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:12:40.980700 kubelet[1549]: E0213 05:12:40.980610 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:41.749124 kubelet[1549]: E0213 05:12:41.749016 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:42.749432 kubelet[1549]: E0213 05:12:42.749375 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:43.750499 kubelet[1549]: E0213 05:12:43.750398 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:44.751198 kubelet[1549]: E0213 05:12:44.751090 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:45.452740 kubelet[1549]: I0213 05:12:45.452683 1549 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:12:45.465922 systemd[1]: Created slice kubepods-besteffort-pod7ce10029_50d6_400a_921e_7fefb7347d49.slice. 
Feb 13 05:12:45.530590 kubelet[1549]: I0213 05:12:45.530476 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66dq\" (UniqueName: \"kubernetes.io/projected/7ce10029-50d6-400a-921e-7fefb7347d49-kube-api-access-v66dq\") pod \"nginx-deployment-845c78c8b9-w5865\" (UID: \"7ce10029-50d6-400a-921e-7fefb7347d49\") " pod="default/nginx-deployment-845c78c8b9-w5865" Feb 13 05:12:45.723293 kubelet[1549]: E0213 05:12:45.723063 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:45.752183 kubelet[1549]: E0213 05:12:45.752069 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:45.771878 env[1164]: time="2024-02-13T05:12:45.771792024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-w5865,Uid:7ce10029-50d6-400a-921e-7fefb7347d49,Namespace:default,Attempt:0,}" Feb 13 05:12:45.808917 env[1164]: time="2024-02-13T05:12:45.808887082Z" level=error msg="Failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:45.809082 env[1164]: time="2024-02-13T05:12:45.809067670Z" level=error msg="encountered an error cleaning up failed sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:45.809112 env[1164]: time="2024-02-13T05:12:45.809093768Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-w5865,Uid:7ce10029-50d6-400a-921e-7fefb7347d49,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:45.809244 kubelet[1549]: E0213 05:12:45.809232 1549 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:45.809281 kubelet[1549]: E0213 05:12:45.809271 1549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-845c78c8b9-w5865" Feb 13 05:12:45.809304 kubelet[1549]: E0213 05:12:45.809291 1549 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-845c78c8b9-w5865" Feb 13 05:12:45.809411 kubelet[1549]: E0213 05:12:45.809339 1549 pod_workers.go:1294] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"nginx-deployment-845c78c8b9-w5865_default(7ce10029-50d6-400a-921e-7fefb7347d49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-845c78c8b9-w5865_default(7ce10029-50d6-400a-921e-7fefb7347d49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:12:45.809814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5-shm.mount: Deactivated successfully. Feb 13 05:12:45.979971 kubelet[1549]: I0213 05:12:45.979776 1549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:12:45.980917 env[1164]: time="2024-02-13T05:12:45.980827575Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:12:46.033423 env[1164]: time="2024-02-13T05:12:46.033361819Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:46.033698 kubelet[1549]: E0213 05:12:46.033647 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:12:46.033698 kubelet[1549]: E0213 05:12:46.033698 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:12:46.033847 kubelet[1549]: E0213 05:12:46.033745 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:12:46.033847 kubelet[1549]: E0213 05:12:46.033785 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:12:46.752904 kubelet[1549]: E0213 05:12:46.752799 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:47.753097 kubelet[1549]: E0213 05:12:47.752991 
1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:48.753589 kubelet[1549]: E0213 05:12:48.753470 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:49.753960 kubelet[1549]: E0213 05:12:49.753841 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:50.754728 kubelet[1549]: E0213 05:12:50.754614 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:51.754916 kubelet[1549]: E0213 05:12:51.754804 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:52.756001 kubelet[1549]: E0213 05:12:52.755893 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:53.757265 kubelet[1549]: E0213 05:12:53.757155 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:53.884682 env[1164]: time="2024-02-13T05:12:53.884588544Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:12:53.914080 env[1164]: time="2024-02-13T05:12:53.914045359Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:53.914256 kubelet[1549]: E0213 05:12:53.914245 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:12:53.914291 kubelet[1549]: E0213 05:12:53.914271 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:12:53.914314 kubelet[1549]: E0213 05:12:53.914293 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:12:53.914314 kubelet[1549]: E0213 05:12:53.914310 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:12:54.757571 kubelet[1549]: E0213 05:12:54.757456 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:12:55.758272 kubelet[1549]: E0213 05:12:55.758156 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:56.758924 kubelet[1549]: E0213 05:12:56.758816 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:57.759796 kubelet[1549]: E0213 05:12:57.759689 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:57.885168 env[1164]: time="2024-02-13T05:12:57.885041642Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:12:57.910822 env[1164]: time="2024-02-13T05:12:57.910788691Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:12:57.911073 kubelet[1549]: E0213 05:12:57.911023 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:12:57.911114 kubelet[1549]: E0213 05:12:57.911089 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 
05:12:57.911114 kubelet[1549]: E0213 05:12:57.911109 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:12:57.911175 kubelet[1549]: E0213 05:12:57.911125 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:12:58.760993 kubelet[1549]: E0213 05:12:58.760885 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:12:59.762048 kubelet[1549]: E0213 05:12:59.761936 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:00.762721 kubelet[1549]: E0213 05:13:00.762609 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:01.763918 kubelet[1549]: E0213 05:13:01.763810 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:02.764518 kubelet[1549]: E0213 05:13:02.764406 1549 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:03.765645 kubelet[1549]: E0213 05:13:03.765529 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:04.766879 kubelet[1549]: E0213 05:13:04.766802 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:05.723549 kubelet[1549]: E0213 05:13:05.723439 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:05.767899 kubelet[1549]: E0213 05:13:05.767793 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:05.884673 env[1164]: time="2024-02-13T05:13:05.884578781Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:13:05.908199 env[1164]: time="2024-02-13T05:13:05.908137324Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:05.908297 kubelet[1549]: E0213 05:13:05.908283 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:13:05.908338 kubelet[1549]: E0213 05:13:05.908310 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:13:05.908363 kubelet[1549]: E0213 05:13:05.908337 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:05.908363 kubelet[1549]: E0213 05:13:05.908355 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:13:06.768206 kubelet[1549]: E0213 05:13:06.768134 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:07.768670 kubelet[1549]: E0213 05:13:07.768561 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:08.769726 kubelet[1549]: E0213 05:13:08.769644 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:08.884938 env[1164]: time="2024-02-13T05:13:08.884835715Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:13:08.910667 env[1164]: time="2024-02-13T05:13:08.910604883Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:08.910813 kubelet[1549]: E0213 05:13:08.910781 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:13:08.910813 kubelet[1549]: E0213 05:13:08.910807 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:13:08.910880 kubelet[1549]: E0213 05:13:08.910830 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Feb 13 05:13:08.910880 kubelet[1549]: E0213 05:13:08.910848 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:13:09.758822 systemd[1]: Started sshd@7-147.75.90.7:22-85.209.11.227:24221.service. Feb 13 05:13:09.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-147.75.90.7:22-85.209.11.227:24221 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:09.770482 kubelet[1549]: E0213 05:13:09.770384 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:09.786253 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 13 05:13:09.786293 kernel: audit: type=1130 audit(1707801189.758:623): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-147.75.90.7:22-85.209.11.227:24221 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:09.939195 sshd[2443]: kex_exchange_identification: read: Connection reset by peer Feb 13 05:13:09.939195 sshd[2443]: Connection reset by 85.209.11.227 port 24221 Feb 13 05:13:09.939612 systemd[1]: sshd@7-147.75.90.7:22-85.209.11.227:24221.service: Deactivated successfully. 
Feb 13 05:13:09.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-147.75.90.7:22-85.209.11.227:24221 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:10.029526 kernel: audit: type=1131 audit(1707801189.939:624): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-147.75.90.7:22-85.209.11.227:24221 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:10.770781 kubelet[1549]: E0213 05:13:10.770667 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:11.771970 kubelet[1549]: E0213 05:13:11.771863 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:12.772205 kubelet[1549]: E0213 05:13:12.772093 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:13.773397 kubelet[1549]: E0213 05:13:13.773286 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:14.774290 kubelet[1549]: E0213 05:13:14.774182 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:15.775055 kubelet[1549]: E0213 05:13:15.774943 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:16.775969 kubelet[1549]: E0213 05:13:16.775859 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:17.776494 kubelet[1549]: E0213 05:13:17.776374 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:13:18.777153 kubelet[1549]: E0213 05:13:18.777045 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:19.777941 kubelet[1549]: E0213 05:13:19.777826 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:19.884270 env[1164]: time="2024-02-13T05:13:19.884130933Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:13:19.884270 env[1164]: time="2024-02-13T05:13:19.884158812Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:13:19.909653 env[1164]: time="2024-02-13T05:13:19.909585324Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:19.909751 kubelet[1549]: E0213 05:13:19.909742 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:13:19.909786 kubelet[1549]: E0213 05:13:19.909767 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 
05:13:19.909806 env[1164]: time="2024-02-13T05:13:19.909749899Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:19.909827 kubelet[1549]: E0213 05:13:19.909792 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:19.909827 kubelet[1549]: E0213 05:13:19.909810 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:13:19.909907 kubelet[1549]: E0213 05:13:19.909825 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:13:19.909907 kubelet[1549]: E0213 05:13:19.909840 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:13:19.909907 kubelet[1549]: E0213 05:13:19.909858 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:19.909907 kubelet[1549]: E0213 05:13:19.909873 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:13:20.778710 kubelet[1549]: E0213 05:13:20.778599 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:21.779494 kubelet[1549]: E0213 05:13:21.779372 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:13:22.780328 kubelet[1549]: E0213 05:13:22.780216 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:23.781256 kubelet[1549]: E0213 05:13:23.781147 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:24.782135 kubelet[1549]: E0213 05:13:24.782025 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:25.723620 kubelet[1549]: E0213 05:13:25.723515 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:25.782617 kubelet[1549]: E0213 05:13:25.782506 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:26.783202 kubelet[1549]: E0213 05:13:26.783097 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:27.784227 kubelet[1549]: E0213 05:13:27.784117 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:28.784774 kubelet[1549]: E0213 05:13:28.784667 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:29.785408 kubelet[1549]: E0213 05:13:29.785255 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:30.786111 kubelet[1549]: E0213 05:13:30.786041 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:31.787056 kubelet[1549]: E0213 05:13:31.786950 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:13:32.788272 kubelet[1549]: E0213 05:13:32.788161 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:33.789136 kubelet[1549]: E0213 05:13:33.789022 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:33.885019 env[1164]: time="2024-02-13T05:13:33.884934927Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:13:33.885847 env[1164]: time="2024-02-13T05:13:33.885237399Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:13:33.910486 env[1164]: time="2024-02-13T05:13:33.910433306Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:33.910613 env[1164]: time="2024-02-13T05:13:33.910528600Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:33.910666 kubelet[1549]: E0213 05:13:33.910655 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:13:33.910699 kubelet[1549]: E0213 05:13:33.910682 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:13:33.910719 kubelet[1549]: E0213 05:13:33.910702 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:33.910791 kubelet[1549]: E0213 05:13:33.910656 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:13:33.910791 kubelet[1549]: E0213 05:13:33.910747 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:13:33.910791 kubelet[1549]: E0213 05:13:33.910755 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:13:33.910791 kubelet[1549]: E0213 05:13:33.910789 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:33.910895 kubelet[1549]: E0213 05:13:33.910803 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:13:34.789830 kubelet[1549]: E0213 05:13:34.789720 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:35.790857 kubelet[1549]: E0213 05:13:35.790742 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:13:36.791478 kubelet[1549]: E0213 05:13:36.791402 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:37.791724 kubelet[1549]: E0213 05:13:37.791655 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:38.792447 kubelet[1549]: E0213 05:13:38.792321 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:39.792824 kubelet[1549]: E0213 05:13:39.792706 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:40.793398 kubelet[1549]: E0213 05:13:40.793289 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:41.794307 kubelet[1549]: E0213 05:13:41.794198 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:42.794473 kubelet[1549]: E0213 05:13:42.794361 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:43.794692 kubelet[1549]: E0213 05:13:43.794592 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:44.794880 kubelet[1549]: E0213 05:13:44.794735 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:44.885233 env[1164]: time="2024-02-13T05:13:44.885134762Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:13:44.914090 env[1164]: time="2024-02-13T05:13:44.913980255Z" level=error msg="StopPodSandbox for 
\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:44.914250 kubelet[1549]: E0213 05:13:44.914240 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:13:44.914281 kubelet[1549]: E0213 05:13:44.914265 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:13:44.914304 kubelet[1549]: E0213 05:13:44.914288 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:44.914372 kubelet[1549]: E0213 05:13:44.914304 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:13:45.723599 kubelet[1549]: E0213 05:13:45.723483 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:45.795314 kubelet[1549]: E0213 05:13:45.795202 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:46.796208 kubelet[1549]: E0213 05:13:46.796097 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:47.796766 kubelet[1549]: E0213 05:13:47.796658 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:48.796936 kubelet[1549]: E0213 05:13:48.796827 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:48.885006 env[1164]: time="2024-02-13T05:13:48.884884434Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:13:48.910589 env[1164]: time="2024-02-13T05:13:48.910528540Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:48.910736 kubelet[1549]: E0213 05:13:48.910700 1549 remote_runtime.go:205] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:13:48.910736 kubelet[1549]: E0213 05:13:48.910725 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:13:48.910801 kubelet[1549]: E0213 05:13:48.910747 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:48.910801 kubelet[1549]: E0213 05:13:48.910768 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:13:49.797935 kubelet[1549]: E0213 05:13:49.797812 1549 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:50.798208 kubelet[1549]: E0213 05:13:50.798100 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:51.743907 systemd[1]: Started sshd@8-147.75.90.7:22-167.94.146.55:52042.service. Feb 13 05:13:51.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.90.7:22-167.94.146.55:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:51.798521 kubelet[1549]: E0213 05:13:51.798469 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:51.834339 kernel: audit: type=1130 audit(1707801231.742:625): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.90.7:22-167.94.146.55:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:52.627122 systemd[1]: Started sshd@9-147.75.90.7:22-104.250.49.231:56401.service. Feb 13 05:13:52.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.90.7:22-104.250.49.231:56401 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:13:52.719530 kernel: audit: type=1130 audit(1707801232.625:626): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.90.7:22-104.250.49.231:56401 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:13:52.799668 kubelet[1549]: E0213 05:13:52.799602 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:53.800001 kubelet[1549]: E0213 05:13:53.799888 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:54.800762 kubelet[1549]: E0213 05:13:54.800648 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:55.801385 kubelet[1549]: E0213 05:13:55.801224 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:56.801618 kubelet[1549]: E0213 05:13:56.801508 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:57.802289 kubelet[1549]: E0213 05:13:57.802180 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:58.803409 kubelet[1549]: E0213 05:13:58.803297 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:13:58.885043 env[1164]: time="2024-02-13T05:13:58.884942401Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:13:58.914170 env[1164]: time="2024-02-13T05:13:58.914059182Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:13:58.914323 kubelet[1549]: E0213 05:13:58.914313 
1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:13:58.914397 kubelet[1549]: E0213 05:13:58.914346 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:13:58.914423 kubelet[1549]: E0213 05:13:58.914401 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:13:58.914423 kubelet[1549]: E0213 05:13:58.914419 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:13:59.804066 kubelet[1549]: E0213 05:13:59.803988 1549 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:00.804259 kubelet[1549]: E0213 05:14:00.804179 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:01.804641 kubelet[1549]: E0213 05:14:01.804528 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:01.884862 env[1164]: time="2024-02-13T05:14:01.884772594Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:14:01.910333 env[1164]: time="2024-02-13T05:14:01.910296934Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:01.910504 kubelet[1549]: E0213 05:14:01.910474 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:14:01.910504 kubelet[1549]: E0213 05:14:01.910503 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:14:01.910570 kubelet[1549]: E0213 05:14:01.910525 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed 
to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:01.910570 kubelet[1549]: E0213 05:14:01.910542 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:14:02.805002 kubelet[1549]: E0213 05:14:02.804927 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:03.805625 kubelet[1549]: E0213 05:14:03.805551 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:04.792390 sshd[2625]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.250.49.231 user=root Feb 13 05:14:04.791000 audit[2625]: USER_AUTH pid=2625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:14:04.806348 kubelet[1549]: E0213 05:14:04.806311 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:04.884619 kernel: audit: type=1100 audit(1707801244.791:627): pid=2625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:14:05.723067 kubelet[1549]: E0213 05:14:05.722989 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:05.806495 kubelet[1549]: E0213 05:14:05.806425 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:06.715102 sshd[2625]: Failed password for root from 104.250.49.231 port 56401 ssh2 Feb 13 05:14:06.740001 sshd[2622]: Connection closed by 167.94.146.55 port 52042 [preauth] Feb 13 05:14:06.741871 systemd[1]: sshd@8-147.75.90.7:22-167.94.146.55:52042.service: Deactivated successfully. Feb 13 05:14:06.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.90.7:22-167.94.146.55:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:14:06.806899 kubelet[1549]: E0213 05:14:06.806854 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:06.834541 kernel: audit: type=1131 audit(1707801246.741:628): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-147.75.90.7:22-167.94.146.55:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:14:07.807174 kubelet[1549]: E0213 05:14:07.807066 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:08.808225 kubelet[1549]: E0213 05:14:08.808106 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:09.809301 kubelet[1549]: E0213 05:14:09.809191 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:09.884398 env[1164]: time="2024-02-13T05:14:09.884250784Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:14:09.912100 env[1164]: time="2024-02-13T05:14:09.912065221Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:09.912270 kubelet[1549]: E0213 05:14:09.912260 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:14:09.912304 kubelet[1549]: E0213 05:14:09.912287 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:14:09.912325 
kubelet[1549]: E0213 05:14:09.912309 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:09.912398 kubelet[1549]: E0213 05:14:09.912325 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:14:10.260779 sshd[2625]: Received disconnect from 104.250.49.231 port 56401:11: Bye Bye [preauth] Feb 13 05:14:10.260779 sshd[2625]: Disconnected from authenticating user root 104.250.49.231 port 56401 [preauth] Feb 13 05:14:10.263315 systemd[1]: sshd@9-147.75.90.7:22-104.250.49.231:56401.service: Deactivated successfully. Feb 13 05:14:10.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.90.7:22-104.250.49.231:56401 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 05:14:10.355508 kernel: audit: type=1131 audit(1707801250.262:629): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-147.75.90.7:22-104.250.49.231:56401 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:14:10.809730 kubelet[1549]: E0213 05:14:10.809624 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:11.810147 kubelet[1549]: E0213 05:14:11.810043 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:12.810456 kubelet[1549]: E0213 05:14:12.810322 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:12.884844 env[1164]: time="2024-02-13T05:14:12.884700628Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:14:12.910554 env[1164]: time="2024-02-13T05:14:12.910487752Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:12.910712 kubelet[1549]: E0213 05:14:12.910659 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:14:12.910712 kubelet[1549]: E0213 05:14:12.910684 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:14:12.910712 kubelet[1549]: E0213 05:14:12.910707 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:12.910818 kubelet[1549]: E0213 05:14:12.910725 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:14:13.811612 kubelet[1549]: E0213 05:14:13.811496 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:14.812751 kubelet[1549]: E0213 05:14:14.812641 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:15.813864 kubelet[1549]: E0213 05:14:15.813754 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:16.814836 kubelet[1549]: E0213 05:14:16.814718 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:17.815219 kubelet[1549]: E0213 05:14:17.815107 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:18.816010 kubelet[1549]: E0213 05:14:18.815894 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:19.816576 kubelet[1549]: E0213 05:14:19.816461 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:20.817784 kubelet[1549]: E0213 05:14:20.817671 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:21.818746 kubelet[1549]: E0213 05:14:21.818635 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:21.884783 env[1164]: time="2024-02-13T05:14:21.884689801Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:14:21.910618 env[1164]: time="2024-02-13T05:14:21.910558998Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:21.910768 kubelet[1549]: E0213 05:14:21.910707 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:14:21.910768 kubelet[1549]: E0213 05:14:21.910733 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:14:21.910768 kubelet[1549]: E0213 05:14:21.910753 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:21.910873 kubelet[1549]: E0213 05:14:21.910771 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:14:22.819741 kubelet[1549]: E0213 05:14:22.819632 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:23.820852 kubelet[1549]: E0213 
05:14:23.820745 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:23.884400 env[1164]: time="2024-02-13T05:14:23.884292340Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:14:23.899566 env[1164]: time="2024-02-13T05:14:23.899504906Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:23.899664 kubelet[1549]: E0213 05:14:23.899653 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:14:23.899699 kubelet[1549]: E0213 05:14:23.899678 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:14:23.899719 kubelet[1549]: E0213 05:14:23.899703 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:23.899765 kubelet[1549]: E0213 05:14:23.899722 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:14:24.822011 kubelet[1549]: E0213 05:14:24.821898 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:25.723604 kubelet[1549]: E0213 05:14:25.723490 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:25.822683 kubelet[1549]: E0213 05:14:25.822568 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:26.823915 kubelet[1549]: E0213 05:14:26.823809 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:27.824710 kubelet[1549]: E0213 05:14:27.824601 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:28.825139 kubelet[1549]: E0213 05:14:28.825022 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:29.825520 kubelet[1549]: E0213 05:14:29.825412 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:30.826255 kubelet[1549]: E0213 05:14:30.826147 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:31.827457 kubelet[1549]: E0213 05:14:31.827321 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:32.827991 kubelet[1549]: E0213 05:14:32.827885 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:33.828195 kubelet[1549]: E0213 05:14:33.828052 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:34.828481 kubelet[1549]: E0213 05:14:34.828367 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:34.885182 env[1164]: time="2024-02-13T05:14:34.885051440Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:14:34.911423 env[1164]: time="2024-02-13T05:14:34.911314966Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:34.911569 kubelet[1549]: E0213 05:14:34.911528 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:14:34.911569 kubelet[1549]: E0213 05:14:34.911553 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:14:34.911631 kubelet[1549]: E0213 05:14:34.911574 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:34.911631 kubelet[1549]: E0213 05:14:34.911592 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:14:35.829353 kubelet[1549]: E0213 05:14:35.829254 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:36.830704 kubelet[1549]: E0213 05:14:36.830621 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:36.885207 env[1164]: 
time="2024-02-13T05:14:36.885114173Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:14:36.910812 env[1164]: time="2024-02-13T05:14:36.910778750Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:36.910967 kubelet[1549]: E0213 05:14:36.910927 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:14:36.910967 kubelet[1549]: E0213 05:14:36.910954 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:14:36.911027 kubelet[1549]: E0213 05:14:36.910978 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:36.911027 kubelet[1549]: E0213 
05:14:36.910996 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:14:37.831043 kubelet[1549]: E0213 05:14:37.830932 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:38.831352 kubelet[1549]: E0213 05:14:38.831226 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:39.832509 kubelet[1549]: E0213 05:14:39.832400 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:40.833762 kubelet[1549]: E0213 05:14:40.833649 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:41.834731 kubelet[1549]: E0213 05:14:41.834621 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:42.835133 kubelet[1549]: E0213 05:14:42.835020 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:43.836203 kubelet[1549]: E0213 05:14:43.836094 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:44.837456 kubelet[1549]: E0213 05:14:44.837350 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:45.723373 kubelet[1549]: E0213 05:14:45.723250 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:45.837722 kubelet[1549]: E0213 05:14:45.837611 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:46.838308 kubelet[1549]: E0213 05:14:46.838201 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:46.884923 env[1164]: time="2024-02-13T05:14:46.884801272Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:14:46.911157 env[1164]: time="2024-02-13T05:14:46.911125646Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:46.911308 kubelet[1549]: E0213 05:14:46.911299 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:14:46.911351 kubelet[1549]: E0213 05:14:46.911323 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:14:46.911412 kubelet[1549]: E0213 05:14:46.911351 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:46.911412 kubelet[1549]: E0213 05:14:46.911406 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:14:47.838999 kubelet[1549]: E0213 05:14:47.838882 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:48.839942 kubelet[1549]: E0213 05:14:48.839826 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:48.885271 env[1164]: time="2024-02-13T05:14:48.885145335Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:14:48.911567 env[1164]: time="2024-02-13T05:14:48.911536818Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" 
failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:48.911743 kubelet[1549]: E0213 05:14:48.911719 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:14:48.911776 kubelet[1549]: E0213 05:14:48.911757 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:14:48.911796 kubelet[1549]: E0213 05:14:48.911777 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:48.911796 kubelet[1549]: E0213 05:14:48.911793 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:14:49.840402 kubelet[1549]: E0213 05:14:49.840291 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:50.840633 kubelet[1549]: E0213 05:14:50.840518 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:51.841725 kubelet[1549]: E0213 05:14:51.841618 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:52.842630 kubelet[1549]: E0213 05:14:52.842520 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:53.843815 kubelet[1549]: E0213 05:14:53.843706 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:54.844487 kubelet[1549]: E0213 05:14:54.844374 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:55.845605 kubelet[1549]: E0213 05:14:55.845497 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:56.846773 kubelet[1549]: E0213 05:14:56.846660 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:57.847425 kubelet[1549]: E0213 05:14:57.847301 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:57.884395 env[1164]: time="2024-02-13T05:14:57.884267876Z" level=info msg="StopPodSandbox 
for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:14:57.898506 env[1164]: time="2024-02-13T05:14:57.898470586Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:14:57.898674 kubelet[1549]: E0213 05:14:57.898629 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:14:57.898674 kubelet[1549]: E0213 05:14:57.898652 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:14:57.898674 kubelet[1549]: E0213 05:14:57.898675 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:14:57.898781 kubelet[1549]: E0213 05:14:57.898693 1549 pod_workers.go:1294] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:14:58.847675 kubelet[1549]: E0213 05:14:58.847565 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:14:59.848004 kubelet[1549]: E0213 05:14:59.847892 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:00.848709 kubelet[1549]: E0213 05:15:00.848599 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:01.849620 kubelet[1549]: E0213 05:15:01.849508 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:02.850109 kubelet[1549]: E0213 05:15:02.849996 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:02.884912 env[1164]: time="2024-02-13T05:15:02.884825623Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:15:02.912014 env[1164]: time="2024-02-13T05:15:02.911977381Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:02.912259 kubelet[1549]: E0213 05:15:02.912248 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:15:02.912293 kubelet[1549]: E0213 05:15:02.912273 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:15:02.912314 kubelet[1549]: E0213 05:15:02.912294 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:02.912314 kubelet[1549]: E0213 05:15:02.912312 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:15:03.851318 kubelet[1549]: E0213 05:15:03.851202 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:04.852392 kubelet[1549]: E0213 05:15:04.852277 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:05.723559 kubelet[1549]: E0213 05:15:05.723445 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:05.852908 kubelet[1549]: E0213 05:15:05.852801 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:06.854164 kubelet[1549]: E0213 05:15:06.854055 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:07.854630 kubelet[1549]: E0213 05:15:07.854513 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:08.855839 kubelet[1549]: E0213 05:15:08.855726 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:09.856697 kubelet[1549]: E0213 05:15:09.856591 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:10.857655 kubelet[1549]: E0213 05:15:10.857547 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:11.857928 kubelet[1549]: E0213 05:15:11.857819 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:12.858533 kubelet[1549]: E0213 05:15:12.858458 1549 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:12.866887 update_engine[1156]: I0213 05:15:12.866776 1156 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 05:15:12.866887 update_engine[1156]: I0213 05:15:12.866854 1156 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 05:15:12.867763 update_engine[1156]: I0213 05:15:12.867587 1156 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 05:15:12.868513 update_engine[1156]: I0213 05:15:12.868441 1156 omaha_request_params.cc:62] Current group set to lts Feb 13 05:15:12.868750 update_engine[1156]: I0213 05:15:12.868721 1156 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 05:15:12.868750 update_engine[1156]: I0213 05:15:12.868741 1156 update_attempter.cc:643] Scheduling an action processor start. Feb 13 05:15:12.869080 update_engine[1156]: I0213 05:15:12.868772 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 05:15:12.869080 update_engine[1156]: I0213 05:15:12.868839 1156 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 05:15:12.869080 update_engine[1156]: I0213 05:15:12.868975 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 05:15:12.869080 update_engine[1156]: I0213 05:15:12.868991 1156 omaha_request_action.cc:271] Request: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: Feb 13 05:15:12.869080 update_engine[1156]: I0213 05:15:12.869000 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 05:15:12.870628 locksmithd[1182]: 
LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 05:15:12.872082 update_engine[1156]: I0213 05:15:12.872007 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 05:15:12.872259 update_engine[1156]: E0213 05:15:12.872232 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 05:15:12.872444 update_engine[1156]: I0213 05:15:12.872416 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 05:15:12.884516 env[1164]: time="2024-02-13T05:15:12.884414673Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:15:12.913935 env[1164]: time="2024-02-13T05:15:12.913877919Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:12.914178 kubelet[1549]: E0213 05:15:12.914139 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:15:12.914178 kubelet[1549]: E0213 05:15:12.914179 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:15:12.914240 kubelet[1549]: E0213 
05:15:12.914200 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:12.914240 kubelet[1549]: E0213 05:15:12.914219 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:15:13.859055 kubelet[1549]: E0213 05:15:13.858983 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:14.860286 kubelet[1549]: E0213 05:15:14.860219 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:15.860465 kubelet[1549]: E0213 05:15:15.860395 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:15.884553 env[1164]: time="2024-02-13T05:15:15.884423646Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:15:15.899772 env[1164]: time="2024-02-13T05:15:15.899705450Z" level=error msg="StopPodSandbox for 
\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:15.899865 kubelet[1549]: E0213 05:15:15.899843 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:15:15.899902 kubelet[1549]: E0213 05:15:15.899868 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:15:15.899902 kubelet[1549]: E0213 05:15:15.899893 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:15.899968 kubelet[1549]: E0213 05:15:15.899911 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:15:16.861067 kubelet[1549]: E0213 05:15:16.860958 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:17.861437 kubelet[1549]: E0213 05:15:17.861303 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:18.862758 kubelet[1549]: E0213 05:15:18.862648 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:19.863206 kubelet[1549]: E0213 05:15:19.863089 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:20.864272 kubelet[1549]: E0213 05:15:20.864152 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:21.865127 kubelet[1549]: E0213 05:15:21.865018 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:22.824827 update_engine[1156]: I0213 05:15:22.824707 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 05:15:22.825754 update_engine[1156]: I0213 05:15:22.825171 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 05:15:22.825754 update_engine[1156]: E0213 05:15:22.825402 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 05:15:22.825754 update_engine[1156]: I0213 05:15:22.825573 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 
05:15:22.866006 kubelet[1549]: E0213 05:15:22.865893 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:23.867195 kubelet[1549]: E0213 05:15:23.867085 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:24.868106 kubelet[1549]: E0213 05:15:24.868001 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:24.884895 env[1164]: time="2024-02-13T05:15:24.884783772Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:15:24.913770 env[1164]: time="2024-02-13T05:15:24.913721789Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:24.913951 kubelet[1549]: E0213 05:15:24.913897 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:15:24.914008 kubelet[1549]: E0213 05:15:24.913971 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:15:24.914008 kubelet[1549]: E0213 
05:15:24.913995 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:24.914070 kubelet[1549]: E0213 05:15:24.914011 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:15:25.723469 kubelet[1549]: E0213 05:15:25.723369 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:25.869267 kubelet[1549]: E0213 05:15:25.869161 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:26.870389 kubelet[1549]: E0213 05:15:26.870230 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:27.871189 kubelet[1549]: E0213 05:15:27.871080 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:28.872384 kubelet[1549]: E0213 05:15:28.872227 1549 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:29.873069 kubelet[1549]: E0213 05:15:29.872959 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:30.873961 kubelet[1549]: E0213 05:15:30.873844 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:30.884589 env[1164]: time="2024-02-13T05:15:30.884445492Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:15:30.913719 env[1164]: time="2024-02-13T05:15:30.913684848Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:30.913916 kubelet[1549]: E0213 05:15:30.913871 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:15:30.913916 kubelet[1549]: E0213 05:15:30.913895 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:15:30.913916 kubelet[1549]: E0213 05:15:30.913918 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:30.914020 kubelet[1549]: E0213 05:15:30.913936 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:15:31.874862 kubelet[1549]: E0213 05:15:31.874751 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:32.825039 update_engine[1156]: I0213 05:15:32.824919 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 05:15:32.825875 update_engine[1156]: I0213 05:15:32.825412 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 05:15:32.825875 update_engine[1156]: E0213 05:15:32.825610 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 05:15:32.825875 update_engine[1156]: I0213 05:15:32.825779 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 05:15:32.875180 kubelet[1549]: E0213 05:15:32.875070 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:33.876072 
kubelet[1549]: E0213 05:15:33.875962 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:34.876906 kubelet[1549]: E0213 05:15:34.876794 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:35.877704 kubelet[1549]: E0213 05:15:35.877595 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:36.878883 kubelet[1549]: E0213 05:15:36.878809 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:37.879395 kubelet[1549]: E0213 05:15:37.879283 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:37.885052 env[1164]: time="2024-02-13T05:15:37.884972038Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:15:37.914216 env[1164]: time="2024-02-13T05:15:37.914125324Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:37.914426 kubelet[1549]: E0213 05:15:37.914370 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:15:37.914426 kubelet[1549]: E0213 05:15:37.914408 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:15:37.914502 kubelet[1549]: E0213 05:15:37.914430 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:37.914502 kubelet[1549]: E0213 05:15:37.914445 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:15:38.879577 kubelet[1549]: E0213 05:15:38.879491 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:39.880508 kubelet[1549]: E0213 05:15:39.880394 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:40.881004 kubelet[1549]: E0213 05:15:40.880894 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:41.881946 kubelet[1549]: E0213 05:15:41.881835 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:42.825250 update_engine[1156]: I0213 05:15:42.825121 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 05:15:42.826112 update_engine[1156]: I0213 05:15:42.825625 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 05:15:42.826112 update_engine[1156]: E0213 05:15:42.825832 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 05:15:42.826112 update_engine[1156]: I0213 05:15:42.825982 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 05:15:42.826112 update_engine[1156]: I0213 05:15:42.825997 1156 omaha_request_action.cc:621] Omaha request response: Feb 13 05:15:42.826564 update_engine[1156]: E0213 05:15:42.826140 1156 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826167 1156 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826176 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826184 1156 update_attempter.cc:306] Processing Done. Feb 13 05:15:42.826564 update_engine[1156]: E0213 05:15:42.826210 1156 update_attempter.cc:619] Update failed. 
Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826220 1156 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826228 1156 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826237 1156 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826407 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826458 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826467 1156 omaha_request_action.cc:271] Request: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: Feb 13 05:15:42.826564 update_engine[1156]: I0213 05:15:42.826479 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.826787 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 05:15:42.828191 update_engine[1156]: E0213 05:15:42.826951 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827083 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827096 1156 omaha_request_action.cc:621] Omaha request response: Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827106 1156 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type 
OmahaRequestAction Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827114 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827122 1156 update_attempter.cc:306] Processing Done. Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827129 1156 update_attempter.cc:310] Error event sent. Feb 13 05:15:42.828191 update_engine[1156]: I0213 05:15:42.827150 1156 update_check_scheduler.cc:74] Next update check in 42m39s Feb 13 05:15:42.829007 locksmithd[1182]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 05:15:42.829007 locksmithd[1182]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 05:15:42.883128 kubelet[1549]: E0213 05:15:42.883019 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:43.884031 kubelet[1549]: E0213 05:15:43.883927 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:44.884128 kubelet[1549]: E0213 05:15:44.884056 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:44.885213 env[1164]: time="2024-02-13T05:15:44.884123041Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:15:44.911472 env[1164]: time="2024-02-13T05:15:44.911440276Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 05:15:44.911639 kubelet[1549]: E0213 05:15:44.911597 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:15:44.911639 kubelet[1549]: E0213 05:15:44.911622 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:15:44.911707 kubelet[1549]: E0213 05:15:44.911644 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:44.911707 kubelet[1549]: E0213 05:15:44.911662 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" 
podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:15:45.723615 kubelet[1549]: E0213 05:15:45.723496 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:45.885101 kubelet[1549]: E0213 05:15:45.884988 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:46.886062 kubelet[1549]: E0213 05:15:46.885934 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:47.886273 kubelet[1549]: E0213 05:15:47.886169 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:48.886533 kubelet[1549]: E0213 05:15:48.886424 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:49.886790 kubelet[1549]: E0213 05:15:49.886689 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:50.887865 kubelet[1549]: E0213 05:15:50.887754 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:51.503000 audit[3157]: NETFILTER_CFG table=filter:67 family=2 entries=20 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:51.509389 kubelet[1549]: I0213 05:15:51.509360 1549 topology_manager.go:212] "Topology Admit Handler" Feb 13 05:15:51.512562 systemd[1]: Created slice kubepods-besteffort-pod23101915_4b4b_4401_b70b_f544177cac44.slice. 
Feb 13 05:15:51.503000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=11292 a0=3 a1=7ffe6a2e8270 a2=0 a3=7ffe6a2e825c items=0 ppid=1781 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:51.632993 kubelet[1549]: I0213 05:15:51.632956 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/23101915-4b4b-4401-b70b-f544177cac44-data\") pod \"nfs-server-provisioner-0\" (UID: \"23101915-4b4b-4401-b70b-f544177cac44\") " pod="default/nfs-server-provisioner-0" Feb 13 05:15:51.632993 kubelet[1549]: I0213 05:15:51.632976 1549 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvln\" (UniqueName: \"kubernetes.io/projected/23101915-4b4b-4401-b70b-f544177cac44-kube-api-access-4gvln\") pod \"nfs-server-provisioner-0\" (UID: \"23101915-4b4b-4401-b70b-f544177cac44\") " pod="default/nfs-server-provisioner-0" Feb 13 05:15:51.664220 kernel: audit: type=1325 audit(1707801351.503:630): table=filter:67 family=2 entries=20 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:51.664256 kernel: audit: type=1300 audit(1707801351.503:630): arch=c000003e syscall=46 success=yes exit=11292 a0=3 a1=7ffe6a2e8270 a2=0 a3=7ffe6a2e825c items=0 ppid=1781 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:51.664272 kernel: audit: type=1327 audit(1707801351.503:630): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:51.503000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:51.727000 audit[3157]: NETFILTER_CFG table=nat:68 family=2 entries=22 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:51.788397 kernel: audit: type=1325 audit(1707801351.727:631): table=nat:68 family=2 entries=22 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:51.788430 kernel: audit: type=1300 audit(1707801351.727:631): arch=c000003e syscall=46 success=yes exit=6212 a0=3 a1=7ffe6a2e8270 a2=0 a3=31030 items=0 ppid=1781 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:51.727000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=6212 a0=3 a1=7ffe6a2e8270 a2=0 a3=31030 items=0 ppid=1781 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:51.813764 env[1164]: time="2024-02-13T05:15:51.813743745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:23101915-4b4b-4401-b70b-f544177cac44,Namespace:default,Attempt:0,}" Feb 13 05:15:51.883756 env[1164]: time="2024-02-13T05:15:51.883670838Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:15:51.888019 kubelet[1549]: E0213 05:15:51.887991 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:51.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:51.948361 kernel: audit: 
type=1327 audit(1707801351.727:631): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:51.955000 audit[3230]: NETFILTER_CFG table=filter:69 family=2 entries=32 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:51.959503 env[1164]: time="2024-02-13T05:15:51.959443020Z" level=error msg="Failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:51.959809 env[1164]: time="2024-02-13T05:15:51.959786171Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:51.959959 kubelet[1549]: E0213 05:15:51.959947 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:15:51.960013 kubelet[1549]: E0213 05:15:51.959971 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:15:51.960013 kubelet[1549]: E0213 
05:15:51.959995 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:51.960013 kubelet[1549]: E0213 05:15:51.960012 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:15:51.960425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7-shm.mount: Deactivated successfully. 
Feb 13 05:15:51.955000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=11292 a0=3 a1=7ffcf288ce90 a2=0 a3=7ffcf288ce7c items=0 ppid=1781 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:52.017050 env[1164]: time="2024-02-13T05:15:52.017004615Z" level=error msg="encountered an error cleaning up failed sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:52.017050 env[1164]: time="2024-02-13T05:15:52.017033489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:23101915-4b4b-4401-b70b-f544177cac44,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:52.017167 kubelet[1549]: E0213 05:15:52.017133 1549 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:52.017167 kubelet[1549]: E0213 05:15:52.017159 1549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nfs-server-provisioner-0" Feb 13 05:15:52.017216 kubelet[1549]: E0213 05:15:52.017173 1549 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nfs-server-provisioner-0" Feb 13 05:15:52.017216 kubelet[1549]: E0213 05:15:52.017200 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nfs-server-provisioner-0_default(23101915-4b4b-4401-b70b-f544177cac44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nfs-server-provisioner-0_default(23101915-4b4b-4401-b70b-f544177cac44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:15:52.115697 kernel: audit: type=1325 audit(1707801351.955:632): table=filter:69 family=2 entries=32 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:52.115731 kernel: audit: type=1300 audit(1707801351.955:632): arch=c000003e syscall=46 success=yes exit=11292 a0=3 a1=7ffcf288ce90 a2=0 a3=7ffcf288ce7c items=0 ppid=1781 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:52.115750 kernel: audit: type=1327 audit(1707801351.955:632): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:51.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:52.175000 audit[3230]: NETFILTER_CFG table=nat:70 family=2 entries=22 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:52.175000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=6212 a0=3 a1=7ffcf288ce90 a2=0 a3=31030 items=0 ppid=1781 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 05:15:52.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 13 05:15:52.237453 kernel: audit: type=1325 audit(1707801352.175:633): table=nat:70 family=2 entries=22 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 13 05:15:52.469193 kubelet[1549]: I0213 05:15:52.469096 1549 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:15:52.470290 env[1164]: time="2024-02-13T05:15:52.470178937Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:15:52.496931 env[1164]: time="2024-02-13T05:15:52.496835716Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox 
\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:52.497147 kubelet[1549]: E0213 05:15:52.497097 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:15:52.497147 kubelet[1549]: E0213 05:15:52.497148 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:15:52.497207 kubelet[1549]: E0213 05:15:52.497173 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:52.497207 kubelet[1549]: E0213 05:15:52.497190 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:15:52.888446 kubelet[1549]: E0213 05:15:52.888202 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:53.888527 kubelet[1549]: E0213 05:15:53.888427 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:54.889740 kubelet[1549]: E0213 05:15:54.889624 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:55.890600 kubelet[1549]: E0213 05:15:55.890505 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:56.891018 kubelet[1549]: E0213 05:15:56.890900 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:57.891371 kubelet[1549]: E0213 05:15:57.891267 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:58.892496 kubelet[1549]: E0213 05:15:58.892384 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:59.884277 env[1164]: time="2024-02-13T05:15:59.884149623Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:15:59.893488 kubelet[1549]: E0213 05:15:59.893405 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:15:59.910283 env[1164]: time="2024-02-13T05:15:59.910247723Z" level=error msg="StopPodSandbox for 
\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:15:59.910456 kubelet[1549]: E0213 05:15:59.910414 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:15:59.910456 kubelet[1549]: E0213 05:15:59.910445 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:15:59.910524 kubelet[1549]: E0213 05:15:59.910470 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:15:59.910524 kubelet[1549]: E0213 05:15:59.910488 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:16:00.894373 kubelet[1549]: E0213 05:16:00.894254 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:01.895229 kubelet[1549]: E0213 05:16:01.895133 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:02.895610 kubelet[1549]: E0213 05:16:02.895497 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:03.884414 env[1164]: time="2024-02-13T05:16:03.884291644Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:16:03.885560 env[1164]: time="2024-02-13T05:16:03.884591342Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:16:03.896511 kubelet[1549]: E0213 05:16:03.896487 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:03.899742 env[1164]: time="2024-02-13T05:16:03.899711035Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:03.899866 env[1164]: time="2024-02-13T05:16:03.899808780Z" level=error 
msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:03.899899 kubelet[1549]: E0213 05:16:03.899849 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:16:03.899899 kubelet[1549]: E0213 05:16:03.899871 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:16:03.899899 kubelet[1549]: E0213 05:16:03.899885 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:16:03.899899 kubelet[1549]: E0213 05:16:03.899895 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:03.900014 kubelet[1549]: E0213 05:16:03.899900 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:16:03.900014 kubelet[1549]: E0213 05:16:03.899912 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:16:03.900014 kubelet[1549]: E0213 05:16:03.899922 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:03.900014 kubelet[1549]: E0213 05:16:03.899937 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:16:04.897157 kubelet[1549]: E0213 05:16:04.897046 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:05.723628 kubelet[1549]: E0213 05:16:05.723504 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:05.897612 kubelet[1549]: E0213 05:16:05.897516 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:06.898330 kubelet[1549]: E0213 05:16:06.898204 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:07.899248 kubelet[1549]: E0213 05:16:07.899149 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:08.900465 kubelet[1549]: E0213 05:16:08.900351 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:09.901784 kubelet[1549]: E0213 05:16:09.901583 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:10.902391 kubelet[1549]: E0213 05:16:10.902267 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:11.902578 kubelet[1549]: E0213 05:16:11.902472 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:12.903473 
kubelet[1549]: E0213 05:16:12.903366 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:13.031138 systemd[1]: Started sshd@10-147.75.90.7:22-104.250.49.231:44462.service. Feb 13 05:16:13.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.90.7:22-104.250.49.231:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:16:13.058587 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 13 05:16:13.058622 kernel: audit: type=1130 audit(1707801373.029:634): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.90.7:22-104.250.49.231:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:16:13.883496 env[1164]: time="2024-02-13T05:16:13.883453072Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:16:13.899882 env[1164]: time="2024-02-13T05:16:13.899850013Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:13.900064 kubelet[1549]: E0213 05:16:13.900028 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:16:13.900064 kubelet[1549]: E0213 05:16:13.900056 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:16:13.900125 kubelet[1549]: E0213 05:16:13.900078 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:13.900125 kubelet[1549]: E0213 05:16:13.900095 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:16:13.904391 kubelet[1549]: E0213 05:16:13.904320 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:14.905698 kubelet[1549]: E0213 05:16:14.905589 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:15.906544 kubelet[1549]: E0213 05:16:15.906433 1549 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:16.884817 env[1164]: time="2024-02-13T05:16:16.884717837Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:16:16.907584 kubelet[1549]: E0213 05:16:16.907545 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:16.913903 env[1164]: time="2024-02-13T05:16:16.913861000Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:16.914141 kubelet[1549]: E0213 05:16:16.914131 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:16:16.914174 kubelet[1549]: E0213 05:16:16.914156 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:16:16.914217 kubelet[1549]: E0213 05:16:16.914179 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:16.914217 kubelet[1549]: E0213 05:16:16.914195 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:16:17.563765 sshd[3364]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.250.49.231 user=root Feb 13 05:16:17.562000 audit[3364]: USER_AUTH pid=3364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:16:17.655393 kernel: audit: type=1100 audit(1707801377.562:635): pid=3364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:16:17.885069 env[1164]: time="2024-02-13T05:16:17.884813600Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:16:17.908696 kubelet[1549]: E0213 05:16:17.908653 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:17.911467 env[1164]: time="2024-02-13T05:16:17.911428799Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:17.911677 kubelet[1549]: E0213 05:16:17.911639 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:16:17.911677 kubelet[1549]: E0213 05:16:17.911660 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:16:17.911736 kubelet[1549]: E0213 05:16:17.911681 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:17.911736 kubelet[1549]: E0213 05:16:17.911701 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:16:18.909204 kubelet[1549]: E0213 05:16:18.909097 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:19.909777 kubelet[1549]: E0213 05:16:19.909660 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:20.078658 sshd[3364]: Failed password for root from 104.250.49.231 port 44462 ssh2 Feb 13 05:16:20.910816 kubelet[1549]: E0213 05:16:20.910703 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:21.911013 kubelet[1549]: E0213 05:16:21.910905 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:22.911994 kubelet[1549]: E0213 05:16:22.911883 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:23.912910 kubelet[1549]: E0213 05:16:23.912803 1549 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:24.360012 sshd[3364]: Received disconnect from 104.250.49.231 port 44462:11: Bye Bye [preauth] Feb 13 05:16:24.360012 sshd[3364]: Disconnected from authenticating user root 104.250.49.231 port 44462 [preauth] Feb 13 05:16:24.362468 systemd[1]: sshd@10-147.75.90.7:22-104.250.49.231:44462.service: Deactivated successfully. Feb 13 05:16:24.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.90.7:22-104.250.49.231:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:16:24.455530 kernel: audit: type=1131 audit(1707801384.361:636): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-147.75.90.7:22-104.250.49.231:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:16:24.913635 kubelet[1549]: E0213 05:16:24.913527 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:25.723156 kubelet[1549]: E0213 05:16:25.723049 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:25.885715 env[1164]: time="2024-02-13T05:16:25.885600240Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:16:25.902647 env[1164]: time="2024-02-13T05:16:25.902581799Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
05:16:25.902806 kubelet[1549]: E0213 05:16:25.902765 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:16:25.902806 kubelet[1549]: E0213 05:16:25.902793 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:16:25.902880 kubelet[1549]: E0213 05:16:25.902817 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:25.902880 kubelet[1549]: E0213 05:16:25.902836 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:16:25.914537 kubelet[1549]: E0213 
05:16:25.914457 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:26.915361 kubelet[1549]: E0213 05:16:26.915229 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:27.916724 kubelet[1549]: E0213 05:16:27.916604 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:28.917377 kubelet[1549]: E0213 05:16:28.917250 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:29.918135 kubelet[1549]: E0213 05:16:29.917990 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:30.884912 env[1164]: time="2024-02-13T05:16:30.884762057Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:16:30.884912 env[1164]: time="2024-02-13T05:16:30.884762050Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:16:30.910926 env[1164]: time="2024-02-13T05:16:30.910887632Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:30.910926 env[1164]: time="2024-02-13T05:16:30.910908945Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:30.911089 kubelet[1549]: E0213 05:16:30.911048 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:16:30.911089 kubelet[1549]: E0213 05:16:30.911075 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:16:30.911147 kubelet[1549]: E0213 05:16:30.911096 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:30.911147 kubelet[1549]: E0213 05:16:30.911115 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:16:30.911147 kubelet[1549]: E0213 05:16:30.911049 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:16:30.911147 kubelet[1549]: E0213 05:16:30.911133 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:16:30.911263 kubelet[1549]: E0213 05:16:30.911151 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:30.911263 kubelet[1549]: E0213 05:16:30.911165 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:16:30.918181 kubelet[1549]: E0213 05:16:30.918145 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:31.919060 kubelet[1549]: E0213 05:16:31.918946 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:32.920209 kubelet[1549]: E0213 05:16:32.920099 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:33.920463 kubelet[1549]: E0213 05:16:33.920326 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:34.921441 kubelet[1549]: E0213 05:16:34.921311 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:35.922082 kubelet[1549]: E0213 05:16:35.921975 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:36.885159 env[1164]: time="2024-02-13T05:16:36.885066764Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:16:36.911672 env[1164]: time="2024-02-13T05:16:36.911636051Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:36.911867 kubelet[1549]: E0213 05:16:36.911822 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:16:36.911867 kubelet[1549]: E0213 05:16:36.911848 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:16:36.911932 kubelet[1549]: E0213 05:16:36.911872 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:36.911932 kubelet[1549]: E0213 05:16:36.911890 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:16:36.923151 kubelet[1549]: E0213 05:16:36.923112 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Feb 13 05:16:37.923949 kubelet[1549]: E0213 05:16:37.923835 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:38.924775 kubelet[1549]: E0213 05:16:38.924663 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:39.925989 kubelet[1549]: E0213 05:16:39.925880 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:40.926381 kubelet[1549]: E0213 05:16:40.926270 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:41.926586 kubelet[1549]: E0213 05:16:41.926531 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:42.927558 kubelet[1549]: E0213 05:16:42.927448 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:43.884625 env[1164]: time="2024-02-13T05:16:43.884525035Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:16:43.911134 env[1164]: time="2024-02-13T05:16:43.911074372Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:43.911260 kubelet[1549]: E0213 05:16:43.911250 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:16:43.911294 kubelet[1549]: E0213 05:16:43.911274 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:16:43.911316 kubelet[1549]: E0213 05:16:43.911296 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:43.911316 kubelet[1549]: E0213 05:16:43.911312 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:16:43.928557 kubelet[1549]: E0213 05:16:43.928518 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:44.884937 env[1164]: 
time="2024-02-13T05:16:44.884810955Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:16:44.911022 env[1164]: time="2024-02-13T05:16:44.910960121Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:44.911136 kubelet[1549]: E0213 05:16:44.911126 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:16:44.911170 kubelet[1549]: E0213 05:16:44.911151 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:16:44.911193 kubelet[1549]: E0213 05:16:44.911173 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:44.911193 kubelet[1549]: E0213 
05:16:44.911190 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:16:44.929671 kubelet[1549]: E0213 05:16:44.929624 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:45.722665 kubelet[1549]: E0213 05:16:45.722550 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:45.930385 kubelet[1549]: E0213 05:16:45.930276 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:46.930589 kubelet[1549]: E0213 05:16:46.930478 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:47.930823 kubelet[1549]: E0213 05:16:47.930715 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:48.931988 kubelet[1549]: E0213 05:16:48.931876 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:49.885096 env[1164]: time="2024-02-13T05:16:49.884998218Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:16:49.911726 env[1164]: time="2024-02-13T05:16:49.911665109Z" level=error msg="StopPodSandbox for 
\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:49.911810 kubelet[1549]: E0213 05:16:49.911793 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:16:49.911847 kubelet[1549]: E0213 05:16:49.911819 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:16:49.911847 kubelet[1549]: E0213 05:16:49.911839 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:49.911913 kubelet[1549]: E0213 05:16:49.911856 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:16:49.932065 kubelet[1549]: E0213 05:16:49.932001 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:50.932912 kubelet[1549]: E0213 05:16:50.932810 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:51.933886 kubelet[1549]: E0213 05:16:51.933779 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:52.934538 kubelet[1549]: E0213 05:16:52.934428 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:53.935045 kubelet[1549]: E0213 05:16:53.934927 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:54.935232 kubelet[1549]: E0213 05:16:54.935089 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:55.935917 kubelet[1549]: E0213 05:16:55.935810 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:56.884965 env[1164]: time="2024-02-13T05:16:56.884813976Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:16:56.913920 env[1164]: time="2024-02-13T05:16:56.913861926Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy 
network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:56.914060 kubelet[1549]: E0213 05:16:56.914018 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:16:56.914060 kubelet[1549]: E0213 05:16:56.914042 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:16:56.914114 kubelet[1549]: E0213 05:16:56.914062 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:56.914114 kubelet[1549]: E0213 05:16:56.914078 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:16:56.936271 kubelet[1549]: E0213 05:16:56.936220 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:57.936746 kubelet[1549]: E0213 05:16:57.936638 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:58.937785 kubelet[1549]: E0213 05:16:58.937706 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:16:59.885042 env[1164]: time="2024-02-13T05:16:59.884936268Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:16:59.911216 env[1164]: time="2024-02-13T05:16:59.911154699Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:16:59.911348 kubelet[1549]: E0213 05:16:59.911335 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:16:59.911402 
kubelet[1549]: E0213 05:16:59.911368 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:16:59.911441 kubelet[1549]: E0213 05:16:59.911402 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:16:59.911497 kubelet[1549]: E0213 05:16:59.911446 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:16:59.938899 kubelet[1549]: E0213 05:16:59.938873 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:00.939788 kubelet[1549]: E0213 05:17:00.939706 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:01.940713 kubelet[1549]: E0213 05:17:01.940634 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:02.941770 kubelet[1549]: E0213 
05:17:02.941665 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:03.884486 env[1164]: time="2024-02-13T05:17:03.884321897Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:17:03.911803 env[1164]: time="2024-02-13T05:17:03.911743493Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:03.911898 kubelet[1549]: E0213 05:17:03.911884 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:17:03.911938 kubelet[1549]: E0213 05:17:03.911909 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:17:03.911938 kubelet[1549]: E0213 05:17:03.911931 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:03.912027 kubelet[1549]: E0213 05:17:03.911949 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:17:03.942433 kubelet[1549]: E0213 05:17:03.942379 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:04.943566 kubelet[1549]: E0213 05:17:04.943492 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:05.722821 kubelet[1549]: E0213 05:17:05.722752 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:05.944180 kubelet[1549]: E0213 05:17:05.944072 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:06.944374 kubelet[1549]: E0213 05:17:06.944275 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:07.945531 kubelet[1549]: E0213 05:17:07.945428 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:08.945711 kubelet[1549]: E0213 05:17:08.945632 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:09.884321 env[1164]: time="2024-02-13T05:17:09.884216320Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:17:09.913894 env[1164]: time="2024-02-13T05:17:09.913833209Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:09.914018 kubelet[1549]: E0213 05:17:09.913997 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:17:09.914056 kubelet[1549]: E0213 05:17:09.914022 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:17:09.914056 kubelet[1549]: E0213 05:17:09.914042 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Feb 13 05:17:09.914119 kubelet[1549]: E0213 05:17:09.914057 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:17:09.946654 kubelet[1549]: E0213 05:17:09.946596 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:10.885150 env[1164]: time="2024-02-13T05:17:10.885019847Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:17:10.914375 env[1164]: time="2024-02-13T05:17:10.914312101Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:10.914508 kubelet[1549]: E0213 05:17:10.914496 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:17:10.914562 kubelet[1549]: E0213 05:17:10.914524 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:17:10.914562 kubelet[1549]: E0213 05:17:10.914559 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:10.914644 kubelet[1549]: E0213 05:17:10.914586 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:17:10.947437 kubelet[1549]: E0213 05:17:10.947305 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:11.947942 kubelet[1549]: E0213 05:17:11.947864 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:12.948618 kubelet[1549]: E0213 05:17:12.948503 1549 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:13.949737 kubelet[1549]: E0213 05:17:13.949628 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:14.885261 env[1164]: time="2024-02-13T05:17:14.885104638Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:17:14.910787 env[1164]: time="2024-02-13T05:17:14.910723327Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:14.910885 kubelet[1549]: E0213 05:17:14.910859 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:17:14.910885 kubelet[1549]: E0213 05:17:14.910883 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:17:14.910938 kubelet[1549]: E0213 05:17:14.910904 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:14.910938 kubelet[1549]: E0213 05:17:14.910920 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:17:14.950561 kubelet[1549]: E0213 05:17:14.950502 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:15.951029 kubelet[1549]: E0213 05:17:15.950919 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:16.951874 kubelet[1549]: E0213 05:17:16.951764 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:17.952234 kubelet[1549]: E0213 05:17:17.952161 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:18.953036 kubelet[1549]: E0213 05:17:18.952968 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:19.953389 kubelet[1549]: E0213 05:17:19.953233 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
05:17:20.953631 kubelet[1549]: E0213 05:17:20.953551 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:21.954732 kubelet[1549]: E0213 05:17:21.954654 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:22.884180 env[1164]: time="2024-02-13T05:17:22.884075275Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:17:22.909920 env[1164]: time="2024-02-13T05:17:22.909845039Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:22.910084 kubelet[1549]: E0213 05:17:22.910074 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:17:22.910135 kubelet[1549]: E0213 05:17:22.910099 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:17:22.910179 kubelet[1549]: E0213 05:17:22.910136 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:22.910179 kubelet[1549]: E0213 05:17:22.910152 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:17:22.955790 kubelet[1549]: E0213 05:17:22.955761 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:23.885265 env[1164]: time="2024-02-13T05:17:23.885176250Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:17:23.914832 env[1164]: time="2024-02-13T05:17:23.914761929Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:23.914949 kubelet[1549]: E0213 05:17:23.914939 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:17:23.914996 kubelet[1549]: E0213 05:17:23.914966 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:17:23.914996 kubelet[1549]: E0213 05:17:23.914986 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:23.915061 kubelet[1549]: E0213 05:17:23.915002 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:17:23.956734 kubelet[1549]: E0213 05:17:23.956661 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:24.957044 kubelet[1549]: 
E0213 05:17:24.956932 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:25.722729 kubelet[1549]: E0213 05:17:25.722635 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:25.957695 kubelet[1549]: E0213 05:17:25.957585 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:26.958353 kubelet[1549]: E0213 05:17:26.958228 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:27.885130 env[1164]: time="2024-02-13T05:17:27.884995043Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:17:27.914810 env[1164]: time="2024-02-13T05:17:27.914732236Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:27.914989 kubelet[1549]: E0213 05:17:27.914979 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:17:27.915024 kubelet[1549]: E0213 05:17:27.915003 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" 
podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:17:27.915046 kubelet[1549]: E0213 05:17:27.915024 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:27.915046 kubelet[1549]: E0213 05:17:27.915042 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:17:27.958609 kubelet[1549]: E0213 05:17:27.958496 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:28.958795 kubelet[1549]: E0213 05:17:28.958649 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:29.959965 kubelet[1549]: E0213 05:17:29.959852 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:30.960117 kubelet[1549]: E0213 05:17:30.960006 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 05:17:31.960320 kubelet[1549]: E0213 05:17:31.960211 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:32.961257 kubelet[1549]: E0213 05:17:32.961147 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:33.961981 kubelet[1549]: E0213 05:17:33.961870 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:34.962739 kubelet[1549]: E0213 05:17:34.962621 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:35.885383 env[1164]: time="2024-02-13T05:17:35.885258645Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:17:35.914032 env[1164]: time="2024-02-13T05:17:35.914000986Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:35.914223 kubelet[1549]: E0213 05:17:35.914186 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:17:35.914223 kubelet[1549]: E0213 
05:17:35.914210 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:17:35.914279 kubelet[1549]: E0213 05:17:35.914232 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:35.914279 kubelet[1549]: E0213 05:17:35.914250 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:17:35.962814 kubelet[1549]: E0213 05:17:35.962760 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:36.963739 kubelet[1549]: E0213 05:17:36.963630 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:37.885158 env[1164]: time="2024-02-13T05:17:37.885009283Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:17:37.912655 env[1164]: time="2024-02-13T05:17:37.912624018Z" 
level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:37.912770 kubelet[1549]: E0213 05:17:37.912760 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:17:37.912808 kubelet[1549]: E0213 05:17:37.912784 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:17:37.912808 kubelet[1549]: E0213 05:17:37.912806 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:37.912870 kubelet[1549]: E0213 05:17:37.912824 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:17:37.964609 kubelet[1549]: E0213 05:17:37.964548 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:38.964927 kubelet[1549]: E0213 05:17:38.964813 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:39.884441 env[1164]: time="2024-02-13T05:17:39.884206667Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:17:39.910596 env[1164]: time="2024-02-13T05:17:39.910533420Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:39.910757 kubelet[1549]: E0213 05:17:39.910715 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:17:39.910757 kubelet[1549]: E0213 05:17:39.910739 1549 
kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:17:39.910832 kubelet[1549]: E0213 05:17:39.910762 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:39.910832 kubelet[1549]: E0213 05:17:39.910779 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:17:39.965205 kubelet[1549]: E0213 05:17:39.965096 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:40.965553 kubelet[1549]: E0213 05:17:40.965446 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:41.966007 kubelet[1549]: E0213 05:17:41.965893 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:42.966774 kubelet[1549]: E0213 05:17:42.966662 1549 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:43.968027 kubelet[1549]: E0213 05:17:43.967917 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:44.968400 kubelet[1549]: E0213 05:17:44.968293 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:45.723507 kubelet[1549]: E0213 05:17:45.723438 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:45.968705 kubelet[1549]: E0213 05:17:45.968622 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:46.969659 kubelet[1549]: E0213 05:17:46.969580 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:47.970738 kubelet[1549]: E0213 05:17:47.970658 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:48.971758 kubelet[1549]: E0213 05:17:48.971640 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:49.885292 env[1164]: time="2024-02-13T05:17:49.885185661Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:17:49.911750 env[1164]: time="2024-02-13T05:17:49.911658361Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 05:17:49.911934 kubelet[1549]: E0213 05:17:49.911920 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:17:49.911986 kubelet[1549]: E0213 05:17:49.911952 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:17:49.911986 kubelet[1549]: E0213 05:17:49.911984 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:49.912066 kubelet[1549]: E0213 05:17:49.912010 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 
05:17:49.972536 kubelet[1549]: E0213 05:17:49.972504 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:50.885045 env[1164]: time="2024-02-13T05:17:50.884948064Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:17:50.913748 env[1164]: time="2024-02-13T05:17:50.913715065Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:50.914008 kubelet[1549]: E0213 05:17:50.913918 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:17:50.914008 kubelet[1549]: E0213 05:17:50.913949 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:17:50.914008 kubelet[1549]: E0213 05:17:50.913978 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:50.914008 kubelet[1549]: E0213 05:17:50.914001 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:17:50.972739 kubelet[1549]: E0213 05:17:50.972665 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:51.973746 kubelet[1549]: E0213 05:17:51.973636 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:52.885139 env[1164]: time="2024-02-13T05:17:52.885045115Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:17:52.911493 env[1164]: time="2024-02-13T05:17:52.911432543Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:17:52.911653 kubelet[1549]: E0213 05:17:52.911611 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:17:52.911653 kubelet[1549]: E0213 05:17:52.911637 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:17:52.911722 kubelet[1549]: E0213 05:17:52.911659 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:17:52.911722 kubelet[1549]: E0213 05:17:52.911676 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:17:52.974719 kubelet[1549]: E0213 05:17:52.974650 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:53.975968 kubelet[1549]: E0213 
05:17:53.975850 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:54.976371 kubelet[1549]: E0213 05:17:54.976261 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:55.976672 kubelet[1549]: E0213 05:17:55.976564 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:56.977517 kubelet[1549]: E0213 05:17:56.977410 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:57.978218 kubelet[1549]: E0213 05:17:57.978111 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:58.979399 kubelet[1549]: E0213 05:17:58.979289 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:17:59.979798 kubelet[1549]: E0213 05:17:59.979690 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:00.980711 kubelet[1549]: E0213 05:18:00.980601 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:01.981896 kubelet[1549]: E0213 05:18:01.981780 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:02.884972 env[1164]: time="2024-02-13T05:18:02.884847792Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:18:02.911617 env[1164]: time="2024-02-13T05:18:02.911558120Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox 
\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:02.911766 kubelet[1549]: E0213 05:18:02.911725 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:18:02.911766 kubelet[1549]: E0213 05:18:02.911751 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:18:02.911831 kubelet[1549]: E0213 05:18:02.911772 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:02.911831 kubelet[1549]: E0213 05:18:02.911789 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:18:02.982656 kubelet[1549]: E0213 05:18:02.982545 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:03.983716 kubelet[1549]: E0213 05:18:03.983604 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:04.884695 env[1164]: time="2024-02-13T05:18:04.884565010Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:18:04.910588 env[1164]: time="2024-02-13T05:18:04.910516448Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:04.910744 kubelet[1549]: E0213 05:18:04.910705 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:18:04.910744 kubelet[1549]: E0213 05:18:04.910731 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:18:04.910815 kubelet[1549]: E0213 05:18:04.910753 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:04.910815 kubelet[1549]: E0213 05:18:04.910772 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:18:04.984954 kubelet[1549]: E0213 05:18:04.984845 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:05.723522 kubelet[1549]: E0213 05:18:05.723417 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:05.884968 env[1164]: time="2024-02-13T05:18:05.884834804Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:18:05.899323 env[1164]: time="2024-02-13T05:18:05.899282906Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" 
error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:05.899537 kubelet[1549]: E0213 05:18:05.899495 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:18:05.899537 kubelet[1549]: E0213 05:18:05.899519 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:18:05.899610 kubelet[1549]: E0213 05:18:05.899543 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:05.899610 kubelet[1549]: E0213 05:18:05.899563 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:18:05.985994 kubelet[1549]: E0213 05:18:05.985771 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:06.986197 kubelet[1549]: E0213 05:18:06.986083 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:07.986901 kubelet[1549]: E0213 05:18:07.986789 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:08.987679 kubelet[1549]: E0213 05:18:08.987571 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:09.988448 kubelet[1549]: E0213 05:18:09.988321 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:10.989118 kubelet[1549]: E0213 05:18:10.989008 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:11.989459 kubelet[1549]: E0213 05:18:11.989304 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:12.990566 kubelet[1549]: E0213 05:18:12.990456 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:13.991775 kubelet[1549]: E0213 05:18:13.991666 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:14.991932 kubelet[1549]: E0213 05:18:14.991824 1549 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:15.992567 kubelet[1549]: E0213 05:18:15.992457 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:16.884896 env[1164]: time="2024-02-13T05:18:16.884801413Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:18:16.910906 env[1164]: time="2024-02-13T05:18:16.910844848Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:16.911016 kubelet[1549]: E0213 05:18:16.911006 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:18:16.911063 kubelet[1549]: E0213 05:18:16.911038 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:18:16.911097 kubelet[1549]: E0213 05:18:16.911071 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:16.911097 kubelet[1549]: E0213 05:18:16.911096 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:18:16.993043 kubelet[1549]: E0213 05:18:16.992970 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:17.993316 kubelet[1549]: E0213 05:18:17.993240 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:18.884812 env[1164]: time="2024-02-13T05:18:18.884662428Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:18:18.913677 env[1164]: time="2024-02-13T05:18:18.913604077Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:18.913884 kubelet[1549]: E0213 05:18:18.913818 1549 
remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:18:18.913884 kubelet[1549]: E0213 05:18:18.913879 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:18:18.913976 kubelet[1549]: E0213 05:18:18.913912 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:18.913976 kubelet[1549]: E0213 05:18:18.913929 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:18:18.994274 kubelet[1549]: E0213 05:18:18.994165 1549 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:19.995443 kubelet[1549]: E0213 05:18:19.995326 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:20.885232 env[1164]: time="2024-02-13T05:18:20.885099270Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:18:20.914784 env[1164]: time="2024-02-13T05:18:20.914750041Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:20.914931 kubelet[1549]: E0213 05:18:20.914919 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:18:20.914983 kubelet[1549]: E0213 05:18:20.914946 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:18:20.914983 kubelet[1549]: E0213 05:18:20.914980 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:20.915064 kubelet[1549]: E0213 05:18:20.915007 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:18:20.995700 kubelet[1549]: E0213 05:18:20.995633 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:21.996125 kubelet[1549]: E0213 05:18:21.996051 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:22.997202 kubelet[1549]: E0213 05:18:22.997090 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:23.997576 kubelet[1549]: E0213 05:18:23.997468 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:24.998442 kubelet[1549]: E0213 05:18:24.998363 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:25.722628 kubelet[1549]: E0213 05:18:25.722548 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
05:18:25.999363 kubelet[1549]: E0213 05:18:25.999124 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:26.999857 kubelet[1549]: E0213 05:18:26.999780 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:28.000538 kubelet[1549]: E0213 05:18:28.000462 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:29.001770 kubelet[1549]: E0213 05:18:29.001657 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:30.002797 kubelet[1549]: E0213 05:18:30.002721 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:31.003016 kubelet[1549]: E0213 05:18:31.002941 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:31.884355 env[1164]: time="2024-02-13T05:18:31.884238826Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:18:31.884355 env[1164]: time="2024-02-13T05:18:31.884238818Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:18:31.911421 env[1164]: time="2024-02-13T05:18:31.911337420Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:31.911557 kubelet[1549]: E0213 05:18:31.911515 1549 
remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:18:31.911557 kubelet[1549]: E0213 05:18:31.911547 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:18:31.911621 kubelet[1549]: E0213 05:18:31.911578 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:31.911621 kubelet[1549]: E0213 05:18:31.911605 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:18:31.911749 env[1164]: time="2024-02-13T05:18:31.911730367Z" level=error 
msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:31.911820 kubelet[1549]: E0213 05:18:31.911814 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:18:31.911843 kubelet[1549]: E0213 05:18:31.911829 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:18:31.911864 kubelet[1549]: E0213 05:18:31.911847 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:31.911864 kubelet[1549]: E0213 05:18:31.911862 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:18:32.004256 kubelet[1549]: E0213 05:18:32.004150 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:32.404175 systemd[1]: Started sshd@11-147.75.90.7:22-104.250.49.231:60818.service. Feb 13 05:18:32.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.90.7:22-104.250.49.231:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:18:32.497533 kernel: audit: type=1130 audit(1707801512.402:637): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.90.7:22-104.250.49.231:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:18:33.005374 kubelet[1549]: E0213 05:18:33.005292 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:34.005945 kubelet[1549]: E0213 05:18:34.005833 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:34.605117 sshd[4321]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.250.49.231 user=root Feb 13 05:18:34.604000 audit[4321]: USER_AUTH pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:18:34.697523 kernel: audit: type=1100 audit(1707801514.604:638): pid=4321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=104.250.49.231 addr=104.250.49.231 terminal=ssh res=failed' Feb 13 05:18:35.006979 kubelet[1549]: E0213 05:18:35.006870 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:35.885473 env[1164]: time="2024-02-13T05:18:35.885321300Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:18:35.937043 env[1164]: time="2024-02-13T05:18:35.936951085Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:35.937211 kubelet[1549]: E0213 05:18:35.937189 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:18:35.937294 kubelet[1549]: E0213 05:18:35.937229 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 
05:18:35.937294 kubelet[1549]: E0213 05:18:35.937273 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:35.937445 kubelet[1549]: E0213 05:18:35.937308 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:18:36.008115 kubelet[1549]: E0213 05:18:36.008009 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:36.257120 sshd[4321]: Failed password for root from 104.250.49.231 port 60818 ssh2 Feb 13 05:18:37.008963 kubelet[1549]: E0213 05:18:37.008846 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:38.009711 kubelet[1549]: E0213 05:18:38.009598 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:38.949645 sshd[4321]: Received disconnect from 104.250.49.231 port 60818:11: Bye Bye [preauth] Feb 13 05:18:38.949645 sshd[4321]: Disconnected from 
authenticating user root 104.250.49.231 port 60818 [preauth] Feb 13 05:18:38.952132 systemd[1]: sshd@11-147.75.90.7:22-104.250.49.231:60818.service: Deactivated successfully. Feb 13 05:18:38.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.90.7:22-104.250.49.231:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:18:39.010763 kubelet[1549]: E0213 05:18:39.010724 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:39.044424 kernel: audit: type=1131 audit(1707801518.951:639): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-147.75.90.7:22-104.250.49.231:60818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 05:18:40.010995 kubelet[1549]: E0213 05:18:40.010882 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:41.011779 kubelet[1549]: E0213 05:18:41.011669 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:42.012717 kubelet[1549]: E0213 05:18:42.012603 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:43.013290 kubelet[1549]: E0213 05:18:43.013178 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:43.884475 env[1164]: time="2024-02-13T05:18:43.884293099Z" level=info msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\"" Feb 13 05:18:43.884475 env[1164]: time="2024-02-13T05:18:43.884293090Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 
13 05:18:43.911041 env[1164]: time="2024-02-13T05:18:43.910987056Z" level=error msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:43.911176 env[1164]: time="2024-02-13T05:18:43.911032927Z" level=error msg="StopPodSandbox for \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\" failed" error="failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:43.911227 kubelet[1549]: E0213 05:18:43.911217 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:18:43.911257 kubelet[1549]: E0213 05:18:43.911245 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:18:43.911278 kubelet[1549]: E0213 05:18:43.911266 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:43.911338 kubelet[1549]: E0213 05:18:43.911282 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:18:43.911338 kubelet[1549]: E0213 05:18:43.911216 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7" Feb 13 05:18:43.911338 kubelet[1549]: E0213 05:18:43.911316 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7} Feb 13 05:18:43.911338 kubelet[1549]: E0213 05:18:43.911339 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:43.911487 kubelet[1549]: E0213 05:18:43.911367 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23101915-4b4b-4401-b70b-f544177cac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e1c5117cf63e1b28dcf34cea11c83d7872b8a55d7dc6dbee67fd5aef1782ea7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nfs-server-provisioner-0" podUID=23101915-4b4b-4401-b70b-f544177cac44 Feb 13 05:18:44.014351 kubelet[1549]: E0213 05:18:44.014230 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:45.014978 kubelet[1549]: E0213 05:18:45.014860 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:45.723132 kubelet[1549]: E0213 05:18:45.723025 1549 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:46.016158 kubelet[1549]: E0213 05:18:46.015906 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:47.016382 kubelet[1549]: E0213 05:18:47.016222 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:47.884971 env[1164]: time="2024-02-13T05:18:47.884861057Z" level=info msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\"" Feb 13 05:18:47.911319 
env[1164]: time="2024-02-13T05:18:47.911284772Z" level=error msg="StopPodSandbox for \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\" failed" error="failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:47.911516 kubelet[1549]: E0213 05:18:47.911484 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143" Feb 13 05:18:47.911516 kubelet[1549]: E0213 05:18:47.911511 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143} Feb 13 05:18:47.911602 kubelet[1549]: E0213 05:18:47.911533 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:47.911602 kubelet[1549]: E0213 05:18:47.911553 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae936456-294f-41c6-9471-c93c49d5b396\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85c26ca3530e924515912cab1a22c9c0bbb9d45ef0c14e17397db7fd49703143\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wz5l" podUID=ae936456-294f-41c6-9471-c93c49d5b396 Feb 13 05:18:48.017096 kubelet[1549]: E0213 05:18:48.016991 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:49.017774 kubelet[1549]: E0213 05:18:49.017656 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:50.017924 kubelet[1549]: E0213 05:18:50.017817 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:51.018767 kubelet[1549]: E0213 05:18:51.018652 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:52.019058 kubelet[1549]: E0213 05:18:52.018949 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:53.019969 kubelet[1549]: E0213 05:18:53.019859 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:54.021216 kubelet[1549]: E0213 05:18:54.021102 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:54.884136 env[1164]: time="2024-02-13T05:18:54.884035126Z" level=info msg="StopPodSandbox for \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\"" Feb 13 05:18:54.910832 env[1164]: time="2024-02-13T05:18:54.910768057Z" level=error msg="StopPodSandbox for 
\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\" failed" error="failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 05:18:54.910948 kubelet[1549]: E0213 05:18:54.910934 1549 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5" Feb 13 05:18:54.911002 kubelet[1549]: E0213 05:18:54.910967 1549 kuberuntime_manager.go:1312] "Failed to stop sandbox" podSandboxID={Type:containerd ID:5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5} Feb 13 05:18:54.911045 kubelet[1549]: E0213 05:18:54.911003 1549 kuberuntime_manager.go:1038] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 05:18:54.911045 kubelet[1549]: E0213 05:18:54.911030 1549 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce10029-50d6-400a-921e-7fefb7347d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5225f5951a6cd1ff9f74eff979c17966671bc2e6e92dd70cafb3da1a689bcfb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-845c78c8b9-w5865" podUID=7ce10029-50d6-400a-921e-7fefb7347d49 Feb 13 05:18:55.022048 kubelet[1549]: E0213 05:18:55.021941 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:56.022622 kubelet[1549]: E0213 05:18:56.022511 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 05:18:57.023770 kubelet[1549]: E0213 05:18:57.023658 1549 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"