Feb 9 09:47:45.544806 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31 Feb 9 09:47:45.544819 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 9 09:47:45.544825 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:47:45.544829 kernel: BIOS-provided physical RAM map: Feb 9 09:47:45.544833 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 9 09:47:45.544836 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 9 09:47:45.544841 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 9 09:47:45.544846 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 9 09:47:45.544849 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 9 09:47:45.544853 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbbfff] usable Feb 9 09:47:45.544857 kernel: BIOS-e820: [mem 0x000000006dfbc000-0x000000006dfbcfff] ACPI NVS Feb 9 09:47:45.544861 kernel: BIOS-e820: [mem 0x000000006dfbd000-0x000000006dfbdfff] reserved Feb 9 09:47:45.544864 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x0000000077fc4fff] usable Feb 9 09:47:45.544868 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved Feb 9 09:47:45.544874 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable Feb 9 09:47:45.544878 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS Feb 9 09:47:45.544882 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved Feb 9 09:47:45.544886 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable Feb 9 09:47:45.544890 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved Feb 9 09:47:45.544894 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 9 09:47:45.544898 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 9 09:47:45.544902 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 9 09:47:45.544906 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 9 09:47:45.544911 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 9 09:47:45.544915 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable Feb 9 09:47:45.544919 kernel: NX (Execute Disable) protection: active Feb 9 09:47:45.544924 kernel: SMBIOS 3.2.1 present. 
Feb 9 09:47:45.544928 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Feb 9 09:47:45.544932 kernel: tsc: Detected 3400.000 MHz processor Feb 9 09:47:45.544936 kernel: tsc: Detected 3399.906 MHz TSC Feb 9 09:47:45.544940 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 09:47:45.544945 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 09:47:45.544949 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000 Feb 9 09:47:45.544954 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 09:47:45.544959 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000 Feb 9 09:47:45.544963 kernel: Using GB pages for direct mapping Feb 9 09:47:45.544967 kernel: ACPI: Early table checksum verification disabled Feb 9 09:47:45.544971 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 9 09:47:45.544976 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 9 09:47:45.544980 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013) Feb 9 09:47:45.544986 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 9 09:47:45.544991 kernel: ACPI: FACS 0x0000000079662F80 000040 Feb 9 09:47:45.544996 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013) Feb 9 09:47:45.545001 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013) Feb 9 09:47:45.545005 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 9 09:47:45.545010 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 9 09:47:45.545015 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 9 09:47:45.545020 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 9 09:47:45.545025 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 9 09:47:45.545029 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 9 09:47:45.545034 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:47:45.545038 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 9 09:47:45.545043 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 9 09:47:45.545048 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:47:45.545052 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:47:45.545057 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 9 09:47:45.545062 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 9 09:47:45.545067 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:47:45.545071 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:47:45.545076 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 9 09:47:45.545080 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Feb 9 09:47:45.545085 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 9 09:47:45.545090 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 9 09:47:45.545094 kernel: ACPI: SSDT 
0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 9 09:47:45.545100 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf5m 01072009 AMI 00010013) Feb 9 09:47:45.545105 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 9 09:47:45.545109 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 9 09:47:45.545114 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 9 09:47:45.545118 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 9 09:47:45.545123 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 9 09:47:45.545128 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733] Feb 9 09:47:45.545132 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e] Feb 9 09:47:45.545137 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf] Feb 9 09:47:45.545142 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863] Feb 9 09:47:45.545147 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab] Feb 9 09:47:45.545151 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b] Feb 9 09:47:45.545156 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b] Feb 9 09:47:45.545161 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0] Feb 9 09:47:45.545165 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3] Feb 9 09:47:45.545170 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd] Feb 9 09:47:45.545174 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea] Feb 9 09:47:45.545179 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27] Feb 9 09:47:45.545184 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5] Feb 9 09:47:45.545189 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce] Feb 9 09:47:45.545193 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311] Feb 9 09:47:45.545198 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab] Feb 9 09:47:45.545202 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d] Feb 9 09:47:45.545207 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071] Feb 9 09:47:45.545211 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab] Feb 9 09:47:45.545216 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103] Feb 9 09:47:45.545220 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e] Feb 9 09:47:45.545226 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17] Feb 9 09:47:45.545230 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b] Feb 9 09:47:45.545235 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93] Feb 9 09:47:45.545239 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26] Feb 9 09:47:45.545244 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f] Feb 9 09:47:45.545248 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f] Feb 9 09:47:45.545253 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf] Feb 9 09:47:45.545258 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf] Feb 9 09:47:45.545262 kernel: ACPI: Reserving HEST table memory at [mem 
0x7958ffe0-0x7959025b] Feb 9 09:47:45.545268 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1] Feb 9 09:47:45.545272 kernel: No NUMA configuration found Feb 9 09:47:45.545277 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff] Feb 9 09:47:45.545281 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff] Feb 9 09:47:45.545286 kernel: Zone ranges: Feb 9 09:47:45.545291 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 09:47:45.545295 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 09:47:45.545300 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff] Feb 9 09:47:45.545304 kernel: Movable zone start for each node Feb 9 09:47:45.545310 kernel: Early memory node ranges Feb 9 09:47:45.545314 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 9 09:47:45.545319 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 9 09:47:45.545323 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbbfff] Feb 9 09:47:45.545328 kernel: node 0: [mem 0x000000006dfbe000-0x0000000077fc4fff] Feb 9 09:47:45.545332 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff] Feb 9 09:47:45.545337 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff] Feb 9 09:47:45.545341 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff] Feb 9 09:47:45.545346 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff] Feb 9 09:47:45.545355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 09:47:45.545360 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 9 09:47:45.545365 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 9 09:47:45.545370 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 9 09:47:45.545375 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Feb 9 09:47:45.545380 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Feb 9 09:47:45.545385 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges Feb 9 09:47:45.545390 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges Feb 9 09:47:45.545396 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 9 09:47:45.545401 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 9 09:47:45.545406 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 9 09:47:45.545411 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 9 09:47:45.545415 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 9 09:47:45.545420 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 9 09:47:45.545425 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 9 09:47:45.545430 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 9 09:47:45.545435 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 9 09:47:45.545441 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 9 09:47:45.545446 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 9 09:47:45.545451 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 9 09:47:45.545455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 9 09:47:45.545460 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 9 09:47:45.545465 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 9 09:47:45.545470 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 9 09:47:45.545475 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 9 09:47:45.545497 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Feb 9 09:47:45.545503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 9 09:47:45.545508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 09:47:45.545513 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 09:47:45.545532 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 09:47:45.545536 kernel: TSC deadline timer available Feb 9 09:47:45.545541 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 9 09:47:45.545546 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices Feb 9 09:47:45.545551 kernel: Booting paravirtualized kernel on bare hardware Feb 9 09:47:45.545556 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 09:47:45.545562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 9 09:47:45.545567 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 9 09:47:45.545572 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 9 09:47:45.545577 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 9 09:47:45.545581 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327 Feb 9 09:47:45.545586 kernel: Policy zone: Normal Feb 9 09:47:45.545592 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:47:45.545597 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 09:47:45.545603 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 9 09:47:45.545608 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 9 09:47:45.545613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:47:45.545618 kernel: Memory: 32683728K/33411988K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 9 09:47:45.545623 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 9 09:47:45.545628 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 09:47:45.545633 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 09:47:45.545637 kernel: rcu: Hierarchical RCU implementation. Feb 9 09:47:45.545643 kernel: rcu: RCU event tracing is enabled. Feb 9 09:47:45.545648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 9 09:47:45.545653 kernel: Rude variant of Tasks RCU enabled. Feb 9 09:47:45.545658 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:47:45.545663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 09:47:45.545668 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 9 09:47:45.545673 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 9 09:47:45.545678 kernel: random: crng init done Feb 9 09:47:45.545683 kernel: Console: colour dummy device 80x25 Feb 9 09:47:45.545688 kernel: printk: console [tty0] enabled Feb 9 09:47:45.545693 kernel: printk: console [ttyS1] enabled Feb 9 09:47:45.545698 kernel: ACPI: Core revision 20210730 Feb 9 09:47:45.545703 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Feb 9 09:47:45.545708 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 09:47:45.545713 kernel: DMAR: Host address width 39 Feb 9 09:47:45.545718 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Feb 9 09:47:45.545723 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Feb 9 09:47:45.545728 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 9 09:47:45.545733 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 9 09:47:45.545738 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Feb 9 09:47:45.545743 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Feb 9 09:47:45.545748 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Feb 9 09:47:45.545753 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 9 09:47:45.545758 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 9 09:47:45.545763 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 9 09:47:45.545768 kernel: x2apic enabled Feb 9 09:47:45.545773 kernel: Switched APIC routing to cluster x2apic. Feb 9 09:47:45.545778 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 09:47:45.545783 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 9 09:47:45.545788 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 9 09:47:45.545793 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 9 09:47:45.545798 kernel: process: using mwait in idle threads Feb 9 09:47:45.545803 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 09:47:45.545808 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 09:47:45.545813 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 09:47:45.545818 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:47:45.545823 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 9 09:47:45.545829 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 09:47:45.545834 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 9 09:47:45.545838 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 9 09:47:45.545843 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 09:47:45.545848 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 09:47:45.545853 kernel: TAA: Mitigation: TSX disabled Feb 9 09:47:45.545858 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 9 09:47:45.545863 kernel: SRBDS: Mitigation: Microcode Feb 9 09:47:45.545868 kernel: GDS: Vulnerable: No microcode Feb 9 09:47:45.545874 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 09:47:45.545879 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 09:47:45.545883 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 09:47:45.545888 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 09:47:45.545893 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 09:47:45.545898 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 09:47:45.545903 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 09:47:45.545908 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 09:47:45.545913 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 9 09:47:45.545918 kernel: Freeing SMP alternatives memory: 32K Feb 9 09:47:45.545923 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:47:45.545928 kernel: LSM: Security Framework initializing Feb 9 09:47:45.545933 kernel: SELinux: Initializing. Feb 9 09:47:45.545938 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:47:45.545943 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:47:45.545947 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 9 09:47:45.545952 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 9 09:47:45.545958 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 9 09:47:45.545963 kernel: ... version: 4 Feb 9 09:47:45.545968 kernel: ... bit width: 48 Feb 9 09:47:45.545973 kernel: ... generic registers: 4 Feb 9 09:47:45.545978 kernel: ... value mask: 0000ffffffffffff Feb 9 09:47:45.545982 kernel: ... max period: 00007fffffffffff Feb 9 09:47:45.545987 kernel: ... fixed-purpose events: 3 Feb 9 09:47:45.545992 kernel: ... event mask: 000000070000000f Feb 9 09:47:45.545997 kernel: signal: max sigframe size: 2032 Feb 9 09:47:45.546002 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:47:45.546007 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 9 09:47:45.546012 kernel: smp: Bringing up secondary CPUs ... Feb 9 09:47:45.546017 kernel: x86: Booting SMP configuration: Feb 9 09:47:45.546022 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 9 09:47:45.546027 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 9 09:47:45.546032 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 9 09:47:45.546037 kernel: smp: Brought up 1 node, 16 CPUs Feb 9 09:47:45.546042 kernel: smpboot: Max logical packages: 1 Feb 9 09:47:45.546048 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 9 09:47:45.546053 kernel: devtmpfs: initialized Feb 9 09:47:45.546058 kernel: x86/mm: Memory block size: 128MB Feb 9 09:47:45.546063 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbc000-0x6dfbcfff] (4096 bytes) Feb 9 09:47:45.546067 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes) Feb 9 09:47:45.546072 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:47:45.546077 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 9 09:47:45.546082 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:47:45.546087 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:47:45.546093 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:47:45.546098 kernel: audit: type=2000 audit(1707472060.120:1): state=initialized audit_enabled=0 res=1 Feb 9 09:47:45.546102 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:47:45.546107 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 09:47:45.546112 kernel: cpuidle: using governor menu Feb 9 09:47:45.546117 kernel: ACPI: bus type PCI registered Feb 9 09:47:45.546122 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:47:45.546127 kernel: dca service started, version 1.12.1 Feb 9 09:47:45.546132 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 9 09:47:45.546138 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 9 09:47:45.546142 kernel: PCI: Using configuration type 1 for base access Feb 9 09:47:45.546147 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 9 09:47:45.546152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 09:47:45.546157 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:47:45.546162 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:47:45.546167 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:47:45.546171 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:47:45.546176 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:47:45.546182 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:47:45.546187 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 09:47:45.546192 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:47:45.546197 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:47:45.546202 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 9 09:47:45.546207 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546211 kernel: ACPI: SSDT 0xFFFF96BB40214F00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 9 09:47:45.546216 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 9 09:47:45.546221 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546227 kernel: ACPI: SSDT 0xFFFF96BB41CEC000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 9 09:47:45.546232 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546237 kernel: ACPI: SSDT 0xFFFF96BB41C5E000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 9 09:47:45.546241 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546246 kernel: ACPI: SSDT 0xFFFF96BB41C5D000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 9 09:47:45.546251 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546256 kernel: ACPI: SSDT 0xFFFF96BB4014B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 9 09:47:45.546261 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:47:45.546266 kernel: ACPI: SSDT 0xFFFF96BB41CE9800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 9 09:47:45.546270 kernel: ACPI: Interpreter enabled Feb 9 09:47:45.546276 kernel: ACPI: PM: (supports S0 S5) Feb 9 09:47:45.546281 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 09:47:45.546286 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 9 09:47:45.546291 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 9 09:47:45.546295 kernel: HEST: Table parsing has been initialized. Feb 9 09:47:45.546300 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 9 09:47:45.546305 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 09:47:45.546310 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 9 09:47:45.546315 kernel: ACPI: PM: Power Resource [USBC] Feb 9 09:47:45.546321 kernel: ACPI: PM: Power Resource [V0PR] Feb 9 09:47:45.546325 kernel: ACPI: PM: Power Resource [V1PR] Feb 9 09:47:45.546330 kernel: ACPI: PM: Power Resource [V2PR] Feb 9 09:47:45.546335 kernel: ACPI: PM: Power Resource [WRST] Feb 9 09:47:45.546340 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 9 09:47:45.546345 kernel: ACPI: PM: Power Resource [FN00] Feb 9 09:47:45.546350 kernel: ACPI: PM: Power Resource [FN01] Feb 9 09:47:45.546355 kernel: ACPI: PM: Power Resource [FN02] Feb 9 09:47:45.546359 kernel: ACPI: PM: Power Resource [FN03] Feb 9 09:47:45.546365 kernel: ACPI: PM: Power Resource [FN04] Feb 9 09:47:45.546370 kernel: ACPI: PM: Power Resource [PIN] Feb 9 09:47:45.546375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 9 09:47:45.546441 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 09:47:45.546503 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 9 09:47:45.546558 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 9 09:47:45.546565 kernel: PCI host bridge to bus 0000:00 Feb 9 09:47:45.546608 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 09:47:45.546646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 09:47:45.546681 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 09:47:45.546716 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Feb 9 09:47:45.546750 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 9 09:47:45.546785 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 9 09:47:45.546833 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 9 09:47:45.546880 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 9 09:47:45.546923 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.546968 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Feb 9 09:47:45.547009 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.547056 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Feb 9 09:47:45.547098 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit] Feb 9 09:47:45.547140 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Feb 9 09:47:45.547181 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Feb 9 09:47:45.547226 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 9 09:47:45.547266 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit] Feb 9 09:47:45.547310 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 9 09:47:45.547350 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit] Feb 9 09:47:45.547393 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 9 09:47:45.547436 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit] Feb 9 09:47:45.547476 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 9 09:47:45.547537 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 9 09:47:45.547577 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit] Feb 9 09:47:45.547618 kernel: pci 
0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit] Feb 9 09:47:45.547661 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 9 09:47:45.547704 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:47:45.547749 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 9 09:47:45.547790 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:47:45.547835 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 9 09:47:45.547876 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit] Feb 9 09:47:45.547923 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 9 09:47:45.547968 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 9 09:47:45.548009 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit] Feb 9 09:47:45.548049 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 9 09:47:45.548093 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 9 09:47:45.548133 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit] Feb 9 09:47:45.548174 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 9 09:47:45.548218 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 9 09:47:45.548259 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff] Feb 9 09:47:45.548300 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff] Feb 9 09:47:45.548339 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Feb 9 09:47:45.548380 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Feb 9 09:47:45.548420 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Feb 9 09:47:45.548460 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff] Feb 9 09:47:45.548502 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 9 09:47:45.548550 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 9 09:47:45.548591 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.548638 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 9 09:47:45.548682 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.548728 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 9 09:47:45.548770 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.548814 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 9 09:47:45.548855 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.548900 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Feb 9 09:47:45.548943 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.548987 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 9 09:47:45.549028 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:47:45.549072 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 9 09:47:45.549116 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 9 09:47:45.549157 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Feb 9 09:47:45.549197 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 9 09:47:45.549245 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 9 09:47:45.549286 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 9 09:47:45.549326 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 09:47:45.549374 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Feb 9 09:47:45.549416 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 
64bit pref] Feb 9 09:47:45.549459 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Feb 9 09:47:45.549504 kernel: pci 0000:02:00.0: PME# supported from D3cold Feb 9 09:47:45.549549 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 09:47:45.549590 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 09:47:45.549638 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Feb 9 09:47:45.549681 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 9 09:47:45.549743 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Feb 9 09:47:45.549783 kernel: pci 0000:02:00.1: PME# supported from D3cold Feb 9 09:47:45.549825 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 09:47:45.549867 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 09:47:45.549909 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 09:47:45.549950 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 9 09:47:45.549990 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:47:45.550031 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 09:47:45.550075 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 09:47:45.550117 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Feb 9 09:47:45.550162 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 09:47:45.550253 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Feb 9 09:47:45.550294 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.550334 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 09:47:45.550374 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 09:47:45.550413 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 9 09:47:45.550459 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 9 09:47:45.550542 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Feb 9 09:47:45.550585 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 09:47:45.550627 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Feb 9 09:47:45.550668 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 9 09:47:45.550709 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 09:47:45.550750 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 09:47:45.550790 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 9 09:47:45.550830 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 09:47:45.550876 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 09:47:45.550917 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 9 09:47:45.550959 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 09:47:45.551000 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 09:47:45.551040 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 09:47:45.551080 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 09:47:45.551120 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 9 09:47:45.551166 kernel: pci_bus 0000:08: extended config space not accessible Feb 9 09:47:45.551218 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 09:47:45.551262 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Feb 9 09:47:45.551306 kernel: pci 0000:08:00.0: 
reg 0x14: [mem 0x96000000-0x9601ffff] Feb 9 09:47:45.551349 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 09:47:45.551393 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 09:47:45.551435 kernel: pci 0000:08:00.0: supports D1 D2 Feb 9 09:47:45.551481 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 09:47:45.551562 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 09:47:45.551603 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 09:47:45.551645 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 9 09:47:45.551652 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 09:47:45.551658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 09:47:45.551663 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 09:47:45.551668 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 09:47:45.551673 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 09:47:45.551680 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 09:47:45.551685 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 09:47:45.551690 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 09:47:45.551695 kernel: iommu: Default domain type: Translated Feb 9 09:47:45.551700 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 09:47:45.551743 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 9 09:47:45.551787 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 09:47:45.551830 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 9 09:47:45.551837 kernel: vgaarb: loaded Feb 9 09:47:45.551844 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:47:45.551850 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:47:45.551855 kernel: PTP clock support registered Feb 9 09:47:45.551860 kernel: PCI: Using ACPI for IRQ routing Feb 9 09:47:45.551865 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 09:47:45.551870 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 09:47:45.551876 kernel: e820: reserve RAM buffer [mem 0x6dfbc000-0x6fffffff] Feb 9 09:47:45.551881 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Feb 9 09:47:45.551886 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Feb 9 09:47:45.551892 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Feb 9 09:47:45.551897 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Feb 9 09:47:45.551902 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 09:47:45.551907 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 9 09:47:45.551913 kernel: clocksource: Switched to clocksource tsc-early Feb 9 09:47:45.551918 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:47:45.551924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:47:45.551929 kernel: pnp: PnP ACPI init Feb 9 09:47:45.551972 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 09:47:45.552014 kernel: pnp 00:02: [dma 0 disabled] Feb 9 09:47:45.552054 kernel: pnp 00:03: [dma 0 disabled] Feb 9 09:47:45.552095 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 09:47:45.552133 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 09:47:45.552172 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 09:47:45.552212 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 09:47:45.552250 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 09:47:45.552286 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 09:47:45.552322 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 09:47:45.552357 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 09:47:45.552393 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 09:47:45.552429 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 09:47:45.552464 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 09:47:45.552549 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 09:47:45.552586 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 09:47:45.552622 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 09:47:45.552657 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 09:47:45.552693 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 09:47:45.552728 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 09:47:45.552767 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 09:47:45.552806 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 09:47:45.552814 kernel: pnp: PnP ACPI: found 10 devices Feb 9 09:47:45.552819 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 09:47:45.552825 kernel: NET: Registered PF_INET protocol family Feb 9 09:47:45.552830 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:47:45.552835 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 09:47:45.552840 
kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 09:47:45.552847 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:47:45.552852 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 09:47:45.552858 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 09:47:45.552863 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 09:47:45.552868 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 09:47:45.552874 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:47:45.552879 kernel: NET: Registered PF_XDP protocol family Feb 9 09:47:45.552920 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Feb 9 09:47:45.552960 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Feb 9 09:47:45.553003 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Feb 9 09:47:45.553043 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 09:47:45.553086 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 09:47:45.553128 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 09:47:45.553171 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 09:47:45.553215 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 09:47:45.553255 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 09:47:45.553296 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Feb 9 09:47:45.553337 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:47:45.553377 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 09:47:45.553417 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 09:47:45.553457 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 09:47:45.553523 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Feb 9 09:47:45.553566 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 09:47:45.553608 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 09:47:45.553648 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Feb 9 09:47:45.553690 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 09:47:45.553732 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 09:47:45.553775 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 09:47:45.553817 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Feb 9 09:47:45.553858 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 09:47:45.553901 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 09:47:45.553942 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Feb 9 09:47:45.553980 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 09:47:45.554017 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 09:47:45.554053 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 09:47:45.554091 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 09:47:45.554127 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Feb 9 09:47:45.554163 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 09:47:45.554204 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Feb 9 09:47:45.554246 kernel: 
pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:47:45.554291 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 9 09:47:45.554329 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Feb 9 09:47:45.554371 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 9 09:47:45.554408 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Feb 9 09:47:45.554450 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 09:47:45.554492 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Feb 9 09:47:45.554534 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 9 09:47:45.554574 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Feb 9 09:47:45.554581 kernel: PCI: CLS 64 bytes, default 64 Feb 9 09:47:45.554586 kernel: DMAR: No ATSR found Feb 9 09:47:45.554592 kernel: DMAR: No SATC found Feb 9 09:47:45.554597 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 9 09:47:45.554604 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 9 09:47:45.554609 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 9 09:47:45.554615 kernel: DMAR: IOMMU feature pasid inconsistent Feb 9 09:47:45.554620 kernel: DMAR: IOMMU feature eafs inconsistent Feb 9 09:47:45.554625 kernel: DMAR: IOMMU feature prs inconsistent Feb 9 09:47:45.554631 kernel: DMAR: IOMMU feature nest inconsistent Feb 9 09:47:45.554636 kernel: DMAR: IOMMU feature mts inconsistent Feb 9 09:47:45.554641 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 9 09:47:45.554647 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 9 09:47:45.554652 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 09:47:45.554658 kernel: DMAR: dmar1: Using Queued invalidation Feb 9 09:47:45.554700 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 09:47:45.554761 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 09:47:45.554802 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 9 09:47:45.554842 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 9 09:47:45.554882 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 9 09:47:45.554923 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 9 09:47:45.554963 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 9 09:47:45.555005 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 9 09:47:45.555045 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 9 09:47:45.555085 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 9 09:47:45.555125 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 9 09:47:45.555165 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 9 09:47:45.555204 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 9 09:47:45.555244 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 9 09:47:45.555285 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 9 09:47:45.555326 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 9 09:47:45.555367 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 9 09:47:45.555407 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 9 09:47:45.555448 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 9 09:47:45.555510 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 9 09:47:45.555567 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 9 09:47:45.555607 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 9 09:47:45.555646 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 9 09:47:45.555691 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 9 09:47:45.555733 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 9 
09:47:45.555776 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 09:47:45.555818 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 9 09:47:45.555861 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 9 09:47:45.555904 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 9 09:47:45.555911 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 09:47:45.555917 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 09:47:45.555923 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Feb 9 09:47:45.555929 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 9 09:47:45.555934 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 09:47:45.555939 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 09:47:45.555944 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 09:47:45.555950 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 9 09:47:45.555994 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 09:47:45.556002 kernel: Initialise system trusted keyrings Feb 9 09:47:45.556008 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 09:47:45.556013 kernel: Key type asymmetric registered Feb 9 09:47:45.556018 kernel: Asymmetric key parser 'x509' registered Feb 9 09:47:45.556024 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:47:45.556029 kernel: io scheduler mq-deadline registered Feb 9 09:47:45.556034 kernel: io scheduler kyber registered Feb 9 09:47:45.556039 kernel: io scheduler bfq registered Feb 9 09:47:45.556079 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 9 09:47:45.556120 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 9 09:47:45.556162 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 9 09:47:45.556203 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 9 09:47:45.556244 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 9 09:47:45.556283 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 9 09:47:45.556325 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 9 09:47:45.556369 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 09:47:45.556377 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 09:47:45.556383 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 09:47:45.556388 kernel: pstore: Registered erst as persistent store backend Feb 9 09:47:45.556394 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 09:47:45.556399 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:47:45.556404 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 09:47:45.556409 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 09:47:45.556453 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 09:47:45.556461 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 09:47:45.556542 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 09:47:45.556579 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 09:47:45.556617 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T09:47:44 UTC (1707472064) Feb 9 09:47:45.556653 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 09:47:45.556660 kernel: fail to initialize ptp_kvm Feb 9 09:47:45.556666 kernel: intel_pstate: Intel P-state driver initializing Feb 9 09:47:45.556671 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 09:47:45.556676 kernel: intel_pstate: HWP enabled Feb 9 09:47:45.556683 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 09:47:45.556688 kernel: vesafb: scrolling: redraw Feb 9 09:47:45.556693 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 09:47:45.556698 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x000000001219ca07, using 768k, total 768k Feb 9 09:47:45.556703 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:47:45.556709 kernel: fb0: VESA VGA frame buffer device Feb 9 09:47:45.556714 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:47:45.556719 kernel: Segment Routing with IPv6 Feb 9 09:47:45.556724 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:47:45.556730 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:47:45.556736 kernel: Key type dns_resolver registered Feb 9 09:47:45.556741 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 09:47:45.556746 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 09:47:45.556751 kernel: IPI shorthand broadcast: enabled Feb 9 09:47:45.556756 kernel: sched_clock: Marking stable (1849526438, 1360101745)->(4633037707, -1423409524) Feb 9 09:47:45.556762 kernel: registered taskstats version 1 Feb 9 09:47:45.556767 kernel: Loading compiled-in X.509 certificates Feb 9 09:47:45.556772 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 09:47:45.556778 kernel: Key type .fscrypt registered Feb 9 09:47:45.556784 kernel: Key type fscrypt-provisioning registered Feb 9 09:47:45.556789 kernel: pstore: Using crash dump compression: deflate Feb 9 09:47:45.556794 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:47:45.556799 kernel: ima: No architecture policies found Feb 9 09:47:45.556804 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 09:47:45.556809 kernel: Write protecting the kernel read-only data: 28672k Feb 9 09:47:45.556815 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 09:47:45.556820 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 09:47:45.556826 kernel: Run /init as init process Feb 9 09:47:45.556831 kernel: with arguments: Feb 9 09:47:45.556836 kernel: /init Feb 9 09:47:45.556842 kernel: with environment: Feb 9 09:47:45.556847 kernel: HOME=/ Feb 9 09:47:45.556852 kernel: TERM=linux Feb 9 09:47:45.556857 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:47:45.556863 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:47:45.556871 systemd[1]: Detected architecture x86-64. Feb 9 09:47:45.556877 systemd[1]: Running in initrd. 
Feb 9 09:47:45.556882 systemd[1]: No hostname configured, using default hostname. Feb 9 09:47:45.556887 systemd[1]: Hostname set to . Feb 9 09:47:45.556892 systemd[1]: Initializing machine ID from random generator. Feb 9 09:47:45.556898 systemd[1]: Queued start job for default target initrd.target. Feb 9 09:47:45.556903 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:47:45.556908 systemd[1]: Reached target cryptsetup.target. Feb 9 09:47:45.556915 systemd[1]: Reached target paths.target. Feb 9 09:47:45.556920 systemd[1]: Reached target slices.target. Feb 9 09:47:45.556925 systemd[1]: Reached target swap.target. Feb 9 09:47:45.556930 systemd[1]: Reached target timers.target. Feb 9 09:47:45.556935 systemd[1]: Listening on iscsid.socket. Feb 9 09:47:45.556941 systemd[1]: Listening on iscsiuio.socket. Feb 9 09:47:45.556947 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:47:45.556952 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:47:45.556958 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:47:45.556964 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:47:45.556969 kernel: tsc: Refined TSC clocksource calibration: 3408.017 MHz Feb 9 09:47:45.556974 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:47:45.556979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe44c681, max_idle_ns: 440795269197 ns Feb 9 09:47:45.556985 kernel: clocksource: Switched to clocksource tsc Feb 9 09:47:45.556990 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:47:45.556996 systemd[1]: Reached target sockets.target. Feb 9 09:47:45.557002 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:47:45.557007 systemd[1]: Finished network-cleanup.service. Feb 9 09:47:45.557012 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 09:47:45.557018 systemd[1]: Starting systemd-journald.service... Feb 9 09:47:45.557023 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:47:45.557031 systemd-journald[270]: Journal started Feb 9 09:47:45.557056 systemd-journald[270]: Runtime Journal (/run/log/journal/dc5ba9eb1b7249e4b7aeb412e575867d) is 8.0M, max 639.3M, 631.3M free. Feb 9 09:47:45.559367 systemd-modules-load[271]: Inserted module 'overlay' Feb 9 09:47:45.565000 audit: BPF prog-id=6 op=LOAD Feb 9 09:47:45.583525 kernel: audit: type=1334 audit(1707472065.565:2): prog-id=6 op=LOAD Feb 9 09:47:45.583540 systemd[1]: Starting systemd-resolved.service... Feb 9 09:47:45.632482 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 09:47:45.632525 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 09:47:45.664483 kernel: Bridge firewalling registered Feb 9 09:47:45.664498 systemd[1]: Started systemd-journald.service. Feb 9 09:47:45.679196 systemd-modules-load[271]: Inserted module 'br_netfilter' Feb 9 09:47:45.727622 kernel: audit: type=1130 audit(1707472065.687:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:45.684888 systemd-resolved[273]: Positive Trust Anchors: Feb 9 09:47:45.784346 kernel: SCSI subsystem initialized Feb 9 09:47:45.784356 kernel: audit: type=1130 audit(1707472065.739:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.684894 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:47:45.904546 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 09:47:45.904581 kernel: audit: type=1130 audit(1707472065.809:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.904597 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:47:45.904618 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:47:45.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.684913 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:47:45.686429 systemd-resolved[273]: Defaulting to hostname 'linux'. Feb 9 09:47:45.687686 systemd[1]: Started systemd-resolved.service. Feb 9 09:47:46.008012 kernel: audit: type=1130 audit(1707472065.964:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.739826 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:47:46.061582 kernel: audit: type=1130 audit(1707472066.016:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.809591 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 09:47:46.116380 kernel: audit: type=1130 audit(1707472066.069:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:46.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:45.907442 systemd-modules-load[271]: Inserted module 'dm_multipath' Feb 9 09:47:45.984849 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:47:46.016773 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:47:46.069784 systemd[1]: Reached target nss-lookup.target. Feb 9 09:47:46.125245 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:47:46.141128 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:47:46.150158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:47:46.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.150855 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:47:46.152853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:47:46.263091 kernel: audit: type=1130 audit(1707472066.150:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.263106 kernel: audit: type=1130 audit(1707472066.213:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.234830 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:47:46.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.272132 systemd[1]: Starting dracut-cmdline.service... Feb 9 09:47:46.295590 dracut-cmdline[295]: dracut-dracut-053 Feb 9 09:47:46.295590 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 09:47:46.295590 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:47:46.395572 kernel: Loading iSCSI transport class v2.0-870. Feb 9 09:47:46.395585 kernel: iscsi: registered transport (tcp) Feb 9 09:47:46.395593 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:47:46.414150 kernel: QLogic iSCSI HBA Driver Feb 9 09:47:46.431140 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:47:46.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:46.441215 systemd[1]: Starting dracut-pre-udev.service... 
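
dracut echoes the kernel command line it will act on, including the dm-verity parameters (verity.usr=PARTUUID=..., verity.usrhash=...) that protect the /usr partition. Parsing that line is straightforward; the sketch below splits /proc/cmdline into key/value pairs (value-less flags map to an empty string, and a key that appears twice, such as console=, keeps its last value here).

    from pathlib import Path

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value}; value-less flags map to ''."""
        args = {}
        for token in text.split():
            key, _, value = token.partition("=")   # only the first '=' separates key and value
            args[key] = value
        return args

    args = parse_cmdline(Path("/proc/cmdline").read_text())
    print(args.get("verity.usrhash"))
    print(args.get("flatcar.oem.id"))
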
Feb 9 09:47:46.498556 kernel: raid6: avx2x4 gen() 46378 MB/s Feb 9 09:47:46.534515 kernel: raid6: avx2x4 xor() 20912 MB/s Feb 9 09:47:46.569557 kernel: raid6: avx2x2 gen() 53109 MB/s Feb 9 09:47:46.604514 kernel: raid6: avx2x2 xor() 32033 MB/s Feb 9 09:47:46.639561 kernel: raid6: avx2x1 gen() 45145 MB/s Feb 9 09:47:46.674558 kernel: raid6: avx2x1 xor() 27859 MB/s Feb 9 09:47:46.709564 kernel: raid6: sse2x4 gen() 21285 MB/s Feb 9 09:47:46.743555 kernel: raid6: sse2x4 xor() 11947 MB/s Feb 9 09:47:46.777521 kernel: raid6: sse2x2 gen() 21610 MB/s Feb 9 09:47:46.811521 kernel: raid6: sse2x2 xor() 13410 MB/s Feb 9 09:47:46.845514 kernel: raid6: sse2x1 gen() 18247 MB/s Feb 9 09:47:46.897387 kernel: raid6: sse2x1 xor() 8901 MB/s Feb 9 09:47:46.897402 kernel: raid6: using algorithm avx2x2 gen() 53109 MB/s Feb 9 09:47:46.897410 kernel: raid6: .... xor() 32033 MB/s, rmw enabled Feb 9 09:47:46.915627 kernel: raid6: using avx2x2 recovery algorithm Feb 9 09:47:46.961485 kernel: xor: automatically using best checksumming function avx Feb 9 09:47:47.040529 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 09:47:47.045271 systemd[1]: Finished dracut-pre-udev.service. Feb 9 09:47:47.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:47.054000 audit: BPF prog-id=7 op=LOAD Feb 9 09:47:47.054000 audit: BPF prog-id=8 op=LOAD Feb 9 09:47:47.055529 systemd[1]: Starting systemd-udevd.service... Feb 9 09:47:47.063492 systemd-udevd[474]: Using default interface naming scheme 'v252'. Feb 9 09:47:47.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:47.069724 systemd[1]: Started systemd-udevd.service. Feb 9 09:47:47.110603 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Feb 9 09:47:47.086131 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 09:47:47.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:47.113125 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 09:47:47.127663 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:47:47.178328 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:47:47.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:47.205554 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:47:47.207486 kernel: libata version 3.00 loaded. Feb 9 09:47:47.242716 kernel: ACPI: bus type USB registered Feb 9 09:47:47.242758 kernel: usbcore: registered new interface driver usbfs Feb 9 09:47:47.242769 kernel: usbcore: registered new interface driver hub Feb 9 09:47:47.278109 kernel: usbcore: registered new device driver usb Feb 9 09:47:47.278485 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 09:47:47.311763 kernel: AES CTR mode by8 optimization enabled Feb 9 09:47:47.312484 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 09:47:47.346063 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
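
At boot the raid6 library benchmarks every gen()/xor() implementation the CPU supports and keeps the fastest generator, which is why the run above ends with "using algorithm avx2x2 gen() 53109 MB/s". The selection itself is just an argmax over the measured throughputs; a small illustrative sketch using the numbers from the log:

    # gen() throughputs in MB/s, copied from the benchmark lines above.
    gen_mbps = {
        "avx2x4": 46378, "avx2x2": 53109, "avx2x1": 45145,
        "sse2x4": 21285, "sse2x2": 21610, "sse2x1": 18247,
    }
    best = max(gen_mbps, key=gen_mbps.get)
    print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")   # -> avx2x2, 53109
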
Feb 9 09:47:47.347487 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 09:47:47.388334 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 9 09:47:47.388419 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 09:47:47.388484 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 9 09:47:47.410484 kernel: pps pps0: new PPS source ptp0 Feb 9 09:47:47.410568 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 09:47:47.410636 kernel: scsi host0: ahci Feb 9 09:47:47.410718 kernel: scsi host1: ahci Feb 9 09:47:47.410786 kernel: scsi host2: ahci Feb 9 09:47:47.410847 kernel: scsi host3: ahci Feb 9 09:47:47.410917 kernel: scsi host4: ahci Feb 9 09:47:47.410985 kernel: scsi host5: ahci Feb 9 09:47:47.411044 kernel: scsi host6: ahci Feb 9 09:47:47.411104 kernel: scsi host7: ahci Feb 9 09:47:47.411483 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 134 Feb 9 09:47:47.411498 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 134 Feb 9 09:47:47.411507 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 134 Feb 9 09:47:47.411515 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 134 Feb 9 09:47:47.411523 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 134 Feb 9 09:47:47.411530 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 134 Feb 9 09:47:47.411539 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 134 Feb 9 09:47:47.411549 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 134 Feb 9 09:47:47.439495 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 9 09:47:47.551695 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 09:47:47.551766 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 09:47:47.551821 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 09:47:47.566157 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2c Feb 9 09:47:47.607004 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 09:47:47.607083 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 9 09:47:47.620072 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 09:47:47.620147 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 09:47:47.679468 kernel: pps pps1: new PPS source ptp1 Feb 9 09:47:47.679543 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 09:47:47.679599 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 9 09:47:47.701509 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 09:47:47.701578 kernel: hub 1-0:1.0: USB hub found Feb 9 09:47:47.701654 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 09:47:47.717523 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 09:47:47.717594 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 09:47:47.717648 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 09:47:47.717656 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 09:47:47.717662 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 09:47:47.718522 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 09:47:47.722019 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 09:47:47.722037 kernel: ata1.00: Features: NCQ-prio Feb 9 09:47:47.722044 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 09:47:47.722052 kernel: ata2.00: Features: NCQ-prio Feb 9 09:47:47.722060 kernel: hub 1-0:1.0: 16 ports detected Feb 9 09:47:47.725519 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.725534 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.725541 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.725548 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.725554 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.727483 kernel: ata1.00: configured for UDMA/133 Feb 9 09:47:47.727499 kernel: ata2.00: configured for UDMA/133 Feb 9 09:47:47.727506 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 09:47:47.727579 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 09:47:47.728482 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 09:47:47.733812 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2d Feb 9 09:47:47.771765 kernel: hub 2-0:1.0: USB hub found Feb 9 09:47:47.771862 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 9 09:47:47.771921 kernel: hub 2-0:1.0: 10 ports detected Feb 9 09:47:47.786622 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 09:47:47.813508 kernel: usb: port power management may be unreliable Feb 9 09:47:47.918526 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 09:47:47.918614 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 9 09:47:48.039494 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 09:47:48.182537 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.197813 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 9 09:47:48.197885 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:47:48.197893 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 09:47:48.197965 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 09:47:48.198036 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 09:47:48.198090 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 09:47:48.198143 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 09:47:48.198197 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 09:47:48.198253 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.199536 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 09:47:48.199552 kernel: GPT:9289727 != 937703087 Feb 9 09:47:48.199560 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 09:47:48.199566 kernel: GPT:9289727 != 937703087 Feb 9 09:47:48.199572 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 09:47:48.199578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:47:48.200539 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.200555 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 09:47:48.336494 kernel: hub 1-14:1.0: USB hub found Feb 9 09:47:48.336580 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 9 09:47:48.336648 kernel: hub 1-14:1.0: 4 ports detected Feb 9 09:47:48.358487 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 9 09:47:48.582735 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 09:47:48.582809 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 09:47:48.596074 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:47:48.609431 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 09:47:48.609448 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 9 09:47:48.609561 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 9 09:47:48.637212 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 09:47:48.680963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:47:48.721631 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 09:47:48.721656 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (645) Feb 9 09:47:48.705114 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:47:48.739352 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:47:48.761288 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:47:48.774520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:47:48.796911 systemd[1]: Starting disk-uuid.service... 
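
Two things in the block above are worth unpacking. First, the reported capacity is plain arithmetic: 937703088 sectors of 512 bytes is about 480 GB decimal, or 447 GiB. Second, the GPT warnings mean the primary header records its backup at LBA 9289727, while a correct backup header must sit on the last LBA, 937703087; that pattern usually indicates an image written for a smaller disk and later placed on a larger one, which is what the disk-uuid step further down repairs. A short sketch of the arithmetic (the interpretation of the mismatch is an inference, not something the log states):

    sectors, sector_size = 937_703_088, 512
    size_bytes = sectors * sector_size
    print(f"{size_bytes / 1e9:.1f} GB")     # ~480.1 GB (decimal), as reported
    print(f"{size_bytes / 2**30:.1f} GiB")  # ~447.1 GiB (binary), as reported

    expected_backup_lba = sectors - 1       # 937703087, the disk's last LBA
    recorded_backup_lba = 9_289_727         # what the primary GPT header claims
    print(expected_backup_lba != recorded_backup_lba)   # True -> "GPT:9289727 != 937703087"
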
Feb 9 09:47:48.826582 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.826594 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:47:48.826647 disk-uuid[676]: Primary Header is updated. Feb 9 09:47:48.826647 disk-uuid[676]: Secondary Entries is updated. Feb 9 09:47:48.826647 disk-uuid[676]: Secondary Header is updated. Feb 9 09:47:48.950621 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:47:48.950636 kernel: usbcore: registered new interface driver usbhid Feb 9 09:47:48.950643 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.950650 kernel: usbhid: USB HID core driver Feb 9 09:47:48.950659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:47:48.950665 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:48.950672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:47:48.950678 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 09:47:48.993564 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 09:47:48.993591 kernel: port_module: 9 callbacks suppressed Feb 9 09:47:48.993599 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 9 09:47:49.056487 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 09:47:49.056589 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 09:47:49.121562 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 09:47:49.121593 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 09:47:49.271598 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 09:47:49.312510 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 9 09:47:49.338552 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 9 09:47:49.925596 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 09:47:49.945154 disk-uuid[677]: The operation has completed successfully. Feb 9 09:47:49.953570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:47:49.979377 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:47:50.077382 kernel: audit: type=1130 audit(1707472069.987:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.077397 kernel: audit: type=1131 audit(1707472069.987:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:49.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:49.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:49.979418 systemd[1]: Finished disk-uuid.service. Feb 9 09:47:50.106585 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 09:47:49.996576 systemd[1]: Starting verity-setup.service... 
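
verity-setup is about to map the /usr partition through dm-verity, and the kernel reports it will use the sha256-avx2 implementation. Conceptually, dm-verity hashes every data block and folds the block hashes into a tree whose root must match the verity.usrhash= value from the command line. The sketch below is only a toy, single-level version of that idea; the real on-disk format built by veritysetup is a salted, multi-level Merkle tree, so this function will not reproduce the actual root hash.

    import hashlib

    BLOCK = 4096  # dm-verity's usual data block size

    def toy_verity_root(image_path: str, salt: bytes = b"") -> str:
        """Toy illustration: hash each block, then hash the concatenated block hashes."""
        block_hashes = []
        with open(image_path, "rb") as img:
            while chunk := img.read(BLOCK):
                # Pad the final short block so every leaf covers a full block.
                block_hashes.append(hashlib.sha256(salt + chunk.ljust(BLOCK, b"\0")).digest())
        return hashlib.sha256(salt + b"".join(block_hashes)).hexdigest()

    # Example (needs root to read the partition device):
    # print(toy_verity_root("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))
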
Feb 9 09:47:50.153169 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:47:50.165752 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:47:50.178112 systemd[1]: Finished verity-setup.service. Feb 9 09:47:50.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.244539 kernel: audit: type=1130 audit(1707472070.197:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.273028 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:47:50.286592 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 09:47:50.279786 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:47:50.280183 systemd[1]: Starting ignition-setup.service... Feb 9 09:47:50.370873 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 09:47:50.370887 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:47:50.370895 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:47:50.370901 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 09:47:50.310920 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:47:50.378926 systemd[1]: Finished ignition-setup.service. Feb 9 09:47:50.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.386881 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:47:50.494564 kernel: audit: type=1130 audit(1707472070.386:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.494580 kernel: audit: type=1130 audit(1707472070.444:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.445172 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:47:50.527221 kernel: audit: type=1334 audit(1707472070.503:24): prog-id=9 op=LOAD Feb 9 09:47:50.503000 audit: BPF prog-id=9 op=LOAD Feb 9 09:47:50.505157 systemd[1]: Starting systemd-networkd.service... Feb 9 09:47:50.542835 systemd-networkd[879]: lo: Link UP Feb 9 09:47:50.542837 systemd-networkd[879]: lo: Gained carrier Feb 9 09:47:50.614733 kernel: audit: type=1130 audit(1707472070.557:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:50.573752 ignition[868]: Ignition 2.14.0 Feb 9 09:47:50.543124 systemd-networkd[879]: Enumeration completed Feb 9 09:47:50.573757 ignition[868]: Stage: fetch-offline Feb 9 09:47:50.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.543161 systemd[1]: Started systemd-networkd.service. Feb 9 09:47:50.791653 kernel: audit: type=1130 audit(1707472070.643:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.791669 kernel: audit: type=1130 audit(1707472070.703:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.791676 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 09:47:50.791758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 9 09:47:50.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.573782 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:47:50.543852 systemd-networkd[879]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:47:50.573795 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:47:50.557652 systemd[1]: Reached target network.target. Feb 9 09:47:50.576329 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:47:50.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.588788 unknown[868]: fetched base config from "system" Feb 9 09:47:50.879620 iscsid[903]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:47:50.879620 iscsid[903]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 09:47:50.879620 iscsid[903]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 09:47:50.879620 iscsid[903]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:47:50.879620 iscsid[903]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:47:50.879620 iscsid[903]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:47:50.879620 iscsid[903]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:47:51.043682 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 09:47:50.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:50.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:50.576398 ignition[868]: parsed url from cmdline: "" Feb 9 09:47:50.588792 unknown[868]: fetched user config from "system" Feb 9 09:47:50.576400 ignition[868]: no config URL provided Feb 9 09:47:50.625069 systemd[1]: Starting iscsiuio.service... Feb 9 09:47:50.576403 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:47:50.630825 systemd[1]: Started iscsiuio.service. Feb 9 09:47:50.579784 ignition[868]: parsing config with SHA512: b8cb51494e915f27002618cadc27337acd2aaaef26049de474336e80ca6fdcae6794655959983ea8a0bd99d63c5f7549e88c1aa4f4b200429991b12049d7268e Feb 9 09:47:50.643873 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:47:50.589081 ignition[868]: fetch-offline: fetch-offline passed Feb 9 09:47:50.703805 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 09:47:50.589084 ignition[868]: POST message to Packet Timeline Feb 9 09:47:50.704565 systemd[1]: Starting ignition-kargs.service... Feb 9 09:47:50.589089 ignition[868]: POST Status error: resource requires networking Feb 9 09:47:50.779394 systemd-networkd[879]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:47:50.589123 ignition[868]: Ignition finished successfully Feb 9 09:47:50.818176 systemd[1]: Starting iscsid.service... Feb 9 09:47:50.781753 ignition[893]: Ignition 2.14.0 Feb 9 09:47:50.838990 systemd[1]: Started iscsid.service. Feb 9 09:47:50.781756 ignition[893]: Stage: kargs Feb 9 09:47:50.853983 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:47:50.781810 ignition[893]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:47:50.871668 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:47:50.781819 ignition[893]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:47:50.888688 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:47:50.783123 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:47:50.899755 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:47:50.784683 ignition[893]: kargs: kargs passed Feb 9 09:47:50.942748 systemd[1]: Reached target remote-fs.target. Feb 9 09:47:50.784686 ignition[893]: POST message to Packet Timeline Feb 9 09:47:50.964289 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:47:50.784696 ignition[893]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:47:50.978939 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:47:50.788791 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53677->[::1]:53: read: connection refused Feb 9 09:47:51.034699 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:47:50.989320 ignition[893]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 09:47:51.064179 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
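
The kargs stage keeps issuing GET https://metadata.packet.net/metadata and fails with DNS errors against [::1]:53 until systemd-networkd actually brings a link up and DHCP completes a few lines further down. Ignition handles this with its own retry loop; the sketch below shows the same pattern in miniature (the retry count and delay are arbitrary here, and Ignition's real backoff policy is not taken from this log).

    import json
    import time
    import urllib.request

    METADATA_URL = "https://metadata.packet.net/metadata"   # endpoint shown in the log

    def fetch_metadata(retries: int = 6, delay: float = 5.0) -> dict:
        """Fetch instance metadata, retrying while the network is still coming up."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                    return json.load(resp)
            except OSError as err:   # DNS and connection failures surface as OSError subclasses
                print(f"GET {METADATA_URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")
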
Feb 9 09:47:50.989748 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43375->[::1]:53: read: connection refused Feb 9 09:47:51.094368 systemd-networkd[879]: enp2s0f1np1: Link UP Feb 9 09:47:51.094807 systemd-networkd[879]: enp2s0f1np1: Gained carrier Feb 9 09:47:51.108974 systemd-networkd[879]: enp2s0f0np0: Link UP Feb 9 09:47:51.109327 systemd-networkd[879]: eno2: Link UP Feb 9 09:47:51.109686 systemd-networkd[879]: eno1: Link UP Feb 9 09:47:51.390230 ignition[893]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 09:47:51.391332 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33589->[::1]:53: read: connection refused Feb 9 09:47:51.823168 systemd-networkd[879]: enp2s0f0np0: Gained carrier Feb 9 09:47:51.832720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 9 09:47:51.852665 systemd-networkd[879]: enp2s0f0np0: DHCPv4 address 139.178.94.23/31, gateway 139.178.94.22 acquired from 145.40.83.140 Feb 9 09:47:52.191771 ignition[893]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 09:47:52.193022 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56631->[::1]:53: read: connection refused Feb 9 09:47:53.079942 systemd-networkd[879]: enp2s0f1np1: Gained IPv6LL Feb 9 09:47:53.335942 systemd-networkd[879]: enp2s0f0np0: Gained IPv6LL Feb 9 09:47:53.794776 ignition[893]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 09:47:53.796136 ignition[893]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54003->[::1]:53: read: connection refused Feb 9 09:47:56.998652 ignition[893]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 09:47:57.039209 ignition[893]: GET result: OK Feb 9 09:47:57.278460 ignition[893]: Ignition finished successfully Feb 9 09:47:57.281024 systemd[1]: Finished ignition-kargs.service. Feb 9 09:47:57.370385 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 9 09:47:57.370405 kernel: audit: type=1130 audit(1707472077.293:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.302464 ignition[921]: Ignition 2.14.0 Feb 9 09:47:57.296179 systemd[1]: Starting ignition-disks.service... 
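
Once enp2s0f0np0 gains carrier, DHCP hands out 139.178.94.23/31 with gateway 139.178.94.22. A /31 is the RFC 3021 point-to-point case: it contains exactly two addresses and no separate network or broadcast address, so the host and its gateway consume the whole prefix. Python's ipaddress module models this directly:

    import ipaddress

    link = ipaddress.ip_network("139.178.94.22/31")
    print(list(link.hosts()))    # [IPv4Address('139.178.94.22'), IPv4Address('139.178.94.23')]
    print(link.num_addresses)    # 2 -- gateway and host fill the entire /31
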
Feb 9 09:47:57.302467 ignition[921]: Stage: disks Feb 9 09:47:57.302588 ignition[921]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:47:57.302597 ignition[921]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:47:57.303912 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:47:57.305337 ignition[921]: disks: disks passed Feb 9 09:47:57.305340 ignition[921]: POST message to Packet Timeline Feb 9 09:47:57.305350 ignition[921]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:47:57.328119 ignition[921]: GET result: OK Feb 9 09:47:57.513610 ignition[921]: Ignition finished successfully Feb 9 09:47:57.516913 systemd[1]: Finished ignition-disks.service. Feb 9 09:47:57.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.530080 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:47:57.608751 kernel: audit: type=1130 audit(1707472077.529:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.594703 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:47:57.594811 systemd[1]: Reached target local-fs.target. Feb 9 09:47:57.617725 systemd[1]: Reached target sysinit.target. Feb 9 09:47:57.632694 systemd[1]: Reached target basic.target. Feb 9 09:47:57.646418 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:47:57.675241 systemd-fsck[936]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 09:47:57.687024 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:47:57.780074 kernel: audit: type=1130 audit(1707472077.695:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.780089 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:47:57.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.700972 systemd[1]: Mounting sysroot.mount... Feb 9 09:47:57.787208 systemd[1]: Mounted sysroot.mount. Feb 9 09:47:57.800814 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:47:57.808417 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:47:57.822383 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 09:47:57.842304 systemd[1]: Starting flatcar-static-network.service... Feb 9 09:47:57.856796 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:47:57.856884 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:47:57.875432 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:47:57.898546 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
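
The fsck summary "602/553520 files, 56014/553472 blocks" is a used/total pair for inodes and blocks, so ROOT is essentially empty at this point: roughly 0.1% of inodes and 10% of blocks in use. A one-liner to turn those counters into percentages:

    files_used, files_total = 602, 553_520
    blocks_used, blocks_total = 56_014, 553_472
    print(f"inodes: {files_used / files_total:.2%} used, blocks: {blocks_used / blocks_total:.2%} used")
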
Feb 9 09:47:58.045155 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947) Feb 9 09:47:58.045172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 09:47:58.045180 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:47:58.045187 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:47:58.045194 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 09:47:58.045256 coreos-metadata[944]: Feb 09 09:47:57.982 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:47:58.045256 coreos-metadata[944]: Feb 09 09:47:58.020 INFO Fetch successful Feb 9 09:47:58.170432 kernel: audit: type=1130 audit(1707472078.053:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.170446 kernel: audit: type=1130 audit(1707472078.115:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.170504 coreos-metadata[943]: Feb 09 09:47:57.982 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:47:58.170504 coreos-metadata[943]: Feb 09 09:47:58.004 INFO Fetch successful Feb 9 09:47:58.170504 coreos-metadata[943]: Feb 09 09:47:58.022 INFO wrote hostname ci-3510.3.2-a-c37e5c1643 to /sysroot/etc/hostname Feb 9 09:47:58.308770 kernel: audit: type=1130 audit(1707472078.178:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.308782 kernel: audit: type=1131 audit(1707472078.178:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:57.911507 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:47:57.973072 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:47:58.351551 initrd-setup-root[954]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:47:58.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.054841 systemd[1]: Finished flatcar-metadata-hostname.service. 
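
coreos-metadata fetches the same Packet metadata document and writes the instance hostname (ci-3510.3.2-a-c37e5c1643) into /sysroot/etc/hostname so the real root picks it up after switch-over. A minimal sketch of that step; the top-level "hostname" field name is an assumption about the metadata document's shape, not something visible in this log.

    import json
    import urllib.request

    with urllib.request.urlopen("https://metadata.packet.net/metadata", timeout=10) as resp:
        metadata = json.load(resp)

    hostname = metadata["hostname"]                    # assumed field name in the metadata JSON
    with open("/sysroot/etc/hostname", "w") as out:    # destination path taken from the log
        out.write(hostname + "\n")
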
Feb 9 09:47:58.425725 kernel: audit: type=1130 audit(1707472078.359:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.425745 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:47:58.115795 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 9 09:47:58.445694 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:47:58.455705 ignition[1017]: INFO : Ignition 2.14.0 Feb 9 09:47:58.455705 ignition[1017]: INFO : Stage: mount Feb 9 09:47:58.455705 ignition[1017]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:47:58.455705 ignition[1017]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:47:58.455705 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:47:58.455705 ignition[1017]: INFO : mount: mount passed Feb 9 09:47:58.455705 ignition[1017]: INFO : POST message to Packet Timeline Feb 9 09:47:58.455705 ignition[1017]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:47:58.455705 ignition[1017]: INFO : GET result: OK Feb 9 09:47:58.115833 systemd[1]: Finished flatcar-static-network.service. Feb 9 09:47:58.552753 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:47:58.178754 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:47:58.300101 systemd[1]: Starting ignition-mount.service... Feb 9 09:47:58.328081 systemd[1]: Starting sysroot-boot.service... Feb 9 09:47:58.344008 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:47:58.344047 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 09:47:58.350961 systemd[1]: Finished sysroot-boot.service. Feb 9 09:47:58.778912 ignition[1017]: INFO : Ignition finished successfully Feb 9 09:47:58.781635 systemd[1]: Finished ignition-mount.service. Feb 9 09:47:58.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.798672 systemd[1]: Starting ignition-files.service... Feb 9 09:47:58.869755 kernel: audit: type=1130 audit(1707472078.796:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:58.863311 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:47:58.927265 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1036) Feb 9 09:47:58.927280 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 09:47:58.927288 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:47:58.950433 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:47:58.999484 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 09:47:59.000791 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
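
In the files stage that follows, Ignition downloads cni-plugins, crictl, kubeadm and kubelet and accepts each one only after its SHA-512 digest matches the sum pinned in the config ("file matches expected sum of: ..."). The check itself is a streaming hash comparison; a minimal sketch using the kubeadm digest that appears further down (the path and chunk size are illustrative):

    import hashlib

    def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream a file through SHA-512 without loading it all into memory."""
        digest = hashlib.sha512()
        with open(path, "rb") as fh:
            while chunk := fh.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    expected = ("1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051"
                "ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660")  # kubeadm sum from the log
    if sha512_of("/sysroot/opt/bin/kubeadm") != expected:
        raise SystemExit("checksum mismatch: refusing to keep the downloaded file")
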
Feb 9 09:47:59.019757 ignition[1055]: INFO : Ignition 2.14.0 Feb 9 09:47:59.019757 ignition[1055]: INFO : Stage: files Feb 9 09:47:59.019757 ignition[1055]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:47:59.019757 ignition[1055]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:47:59.019757 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:47:59.019757 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:47:59.019757 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:47:59.019757 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:47:59.026386 unknown[1055]: wrote ssh authorized keys file for user: core Feb 9 09:47:59.121696 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 09:47:59.121696 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 09:47:59.473741 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:47:59.554144 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 09:47:59.554144 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 09:47:59.597710 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 09:47:59.597710 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 09:47:59.952000 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:47:59.998405 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 09:47:59.998405 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 09:48:00.041698 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] 
writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:48:00.041698 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 09:48:00.073629 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:48:00.240407 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 09:48:00.240407 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:48:00.281699 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:48:00.281699 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 09:48:00.313592 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:48:00.693086 ignition[1055]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 09:48:00.693086 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:48:00.743689 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1063) Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem325649865" Feb 9 09:48:00.743703 ignition[1055]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem325649865": device or resource busy Feb 9 09:48:00.743703 ignition[1055]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem325649865", trying btrfs: device or resource busy Feb 9 09:48:00.743703 
ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem325649865" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem325649865" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem325649865" Feb 9 09:48:00.743703 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem325649865" Feb 9 09:48:01.055773 kernel: audit: type=1130 audit(1707472080.972:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:00.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:00.959625 systemd[1]: Finished ignition-files.service. Feb 9 09:48:01.070712 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(10): [started] processing unit "packet-phone-home.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(10): [finished] processing unit "packet-phone-home.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(11): [started] processing unit "containerd.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(11): op(12): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(11): [finished] processing unit "containerd.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(15): [finished] processing unit 
"prepare-critools.service" Feb 9 09:48:01.070712 ignition[1055]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:48:01.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:00.979443 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:48:01.482804 ignition[1055]: INFO : files: files passed Feb 9 09:48:01.482804 ignition[1055]: INFO : POST message to Packet Timeline Feb 9 09:48:01.482804 ignition[1055]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:48:01.482804 ignition[1055]: INFO : GET result: OK Feb 9 09:48:01.482804 ignition[1055]: INFO : Ignition finished successfully Feb 9 09:48:01.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:48:01.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.769292 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:48:01.041733 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:48:01.042072 systemd[1]: Starting ignition-quench.service... Feb 9 09:48:01.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.063897 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:48:01.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.080903 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:48:01.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.080977 systemd[1]: Finished ignition-quench.service. Feb 9 09:48:01.105834 systemd[1]: Reached target ignition-complete.target. Feb 9 09:48:01.128354 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:48:01.910657 ignition[1101]: INFO : Ignition 2.14.0 Feb 9 09:48:01.910657 ignition[1101]: INFO : Stage: umount Feb 9 09:48:01.910657 ignition[1101]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:48:01.910657 ignition[1101]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:48:01.910657 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:48:01.910657 ignition[1101]: INFO : umount: umount passed Feb 9 09:48:01.910657 ignition[1101]: INFO : POST message to Packet Timeline Feb 9 09:48:01.910657 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:48:01.910657 ignition[1101]: INFO : GET result: OK Feb 9 09:48:01.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 09:48:01.145858 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:48:02.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.145910 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:48:02.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.174767 systemd[1]: Reached target initrd-fs.target. Feb 9 09:48:02.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.102000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:48:01.193736 systemd[1]: Reached target initrd.target. Feb 9 09:48:01.221955 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:48:02.141781 ignition[1101]: INFO : Ignition finished successfully Feb 9 09:48:02.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.224204 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:48:02.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.257593 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:48:02.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.268624 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:48:02.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.301941 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:48:02.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.312102 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:48:02.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.340171 systemd[1]: Stopped target timers.target. Feb 9 09:48:02.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.361225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
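The Ignition "files" stage recorded above (ops 6 through 1b) is driven by a provider-supplied config that Ignition merges with the base config; the config itself is not reproduced in the journal. As an illustration only, a storage/systemd section of roughly the following shape would produce the kubeadm download with checksum verification and the containerd drop-in seen in the log. The JSON field layout follows the Ignition spec v3 style and is an assumption about this particular image; the URL and sha512 digest are copied from the log, while the unit contents are placeholders because the journal records only that the files were written, not what they contain.

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/bin/kubeadm",
            "contents": {
              "source": "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm",
              "verification": {
                "hash": "sha512-1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660"
              }
            }
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "containerd.service",
            "dropins": [
              { "name": "10-use-cgroupfs.conf", "contents": "[Service]\n..." }
            ]
          },
          { "name": "prepare-cni-plugins.service", "enabled": true, "contents": "..." }
        ]
      }
    }

Ignition runs in the initramfs and writes under /sysroot, so the "/sysroot/opt/bin/kubeadm" path in the log corresponds to "/opt/bin/kubeadm" in the config.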
Feb 9 09:48:02.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.361618 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:48:01.381393 systemd[1]: Stopped target initrd.target. Feb 9 09:48:02.377214 kernel: kauditd_printk_skb: 30 callbacks suppressed Feb 9 09:48:02.377229 kernel: audit: type=1131 audit(1707472082.296:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.405097 systemd[1]: Stopped target basic.target. Feb 9 09:48:01.430108 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:48:01.450236 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:48:01.473102 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:48:02.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.492121 systemd[1]: Stopped target remote-fs.target. Feb 9 09:48:02.538904 kernel: audit: type=1131 audit(1707472082.415:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.538916 kernel: audit: type=1131 audit(1707472082.482:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.515229 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:48:02.604292 kernel: audit: type=1131 audit(1707472082.546:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.537125 systemd[1]: Stopped target sysinit.target. Feb 9 09:48:01.558140 systemd[1]: Stopped target local-fs.target. Feb 9 09:48:02.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.581093 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:48:02.751953 kernel: audit: type=1131 audit(1707472082.627:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:48:02.751966 kernel: audit: type=1131 audit(1707472082.694:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.604086 systemd[1]: Stopped target swap.target. Feb 9 09:48:02.818263 kernel: audit: type=1131 audit(1707472082.759:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.623996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:48:02.855568 kernel: audit: type=1130 audit(1707472082.826:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.624361 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:48:02.951581 kernel: audit: type=1131 audit(1707472082.826:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:02.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.646457 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:48:01.670981 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:48:01.671338 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:48:01.696241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:48:01.696612 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:48:01.711318 systemd[1]: Stopped target paths.target. Feb 9 09:48:02.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.726985 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:48:03.076733 kernel: audit: type=1131 audit(1707472082.999:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:01.731706 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:48:03.085000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:48:03.085000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:48:03.086000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:48:03.087000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:48:03.087000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:48:01.746107 systemd[1]: Stopped target slices.target. 
Feb 9 09:48:01.760088 systemd[1]: Stopped target sockets.target. Feb 9 09:48:01.778075 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:48:03.131488 systemd-journald[270]: Received SIGTERM from PID 1 (n/a). Feb 9 09:48:03.131508 iscsid[903]: iscsid shutting down. Feb 9 09:48:01.778328 systemd[1]: Closed iscsid.socket. Feb 9 09:48:01.799168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:48:01.799553 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:48:01.826256 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:48:01.826619 systemd[1]: Stopped ignition-files.service. Feb 9 09:48:01.841159 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:48:01.841529 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:48:01.860211 systemd[1]: Stopping ignition-mount.service... Feb 9 09:48:01.872862 systemd[1]: Stopping iscsiuio.service... Feb 9 09:48:01.887182 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:48:01.902692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:48:01.902821 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:48:01.919897 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:48:01.920064 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:48:01.933819 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:48:01.935826 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:48:01.936074 systemd[1]: Stopped iscsiuio.service. Feb 9 09:48:01.947214 systemd[1]: Stopped target network.target. Feb 9 09:48:01.964799 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:48:01.964890 systemd[1]: Closed iscsiuio.socket. Feb 9 09:48:01.991982 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:48:01.997539 systemd-networkd[879]: enp2s0f1np1: DHCPv6 lease lost Feb 9 09:48:02.009957 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:48:02.011641 systemd-networkd[879]: enp2s0f0np0: DHCPv6 lease lost Feb 9 09:48:03.131000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:48:02.025524 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:48:02.025766 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:48:02.051676 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:48:02.051740 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:48:02.069651 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:48:02.069698 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:48:02.086313 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:48:02.086527 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:48:02.102690 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:48:02.102771 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:48:02.117228 systemd[1]: Stopping network-cleanup.service... Feb 9 09:48:02.129680 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:48:02.129814 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:48:02.149841 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:48:02.149958 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:48:02.167231 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:48:02.167375 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:48:02.187655 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 9 09:48:02.188220 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:48:02.188268 systemd[1]: Stopped ignition-mount.service. Feb 9 09:48:02.201236 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:48:02.201276 systemd[1]: Stopped ignition-disks.service. Feb 9 09:48:02.216825 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:48:02.216874 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:48:02.233696 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:48:02.233799 systemd[1]: Stopped ignition-setup.service. Feb 9 09:48:02.251828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:48:02.251879 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:48:02.266892 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:48:02.282126 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:48:02.282340 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:48:02.297823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:48:02.297923 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:48:02.385591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:48:02.385609 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:48:02.407564 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:48:02.407588 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:48:02.415752 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:48:02.415774 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:48:02.482579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:48:02.482601 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:48:02.546941 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:48:02.612707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:48:02.612736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:48:02.627663 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:48:02.627686 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:48:02.694678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:48:02.694701 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:48:02.760123 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:48:02.760352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:48:02.760390 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:48:02.981960 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:48:02.982004 systemd[1]: Stopped network-cleanup.service. Feb 9 09:48:02.999718 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:48:03.067012 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:48:03.084806 systemd[1]: Switching root. Feb 9 09:48:03.133259 systemd-journald[270]: Journal stopped Feb 9 09:48:06.833999 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:48:06.834015 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 09:48:06.834023 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:48:06.834028 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:48:06.834033 kernel: SELinux: policy capability open_perms=1 Feb 9 09:48:06.834038 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:48:06.834044 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:48:06.834051 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:48:06.834057 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:48:06.834062 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:48:06.834067 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:48:06.834073 systemd[1]: Successfully loaded SELinux policy in 322.468ms. Feb 9 09:48:06.834079 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.500ms. Feb 9 09:48:06.834086 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:48:06.834094 systemd[1]: Detected architecture x86-64. Feb 9 09:48:06.834100 systemd[1]: Detected first boot. Feb 9 09:48:06.834106 systemd[1]: Hostname set to . Feb 9 09:48:06.834112 systemd[1]: Initializing machine ID from random generator. Feb 9 09:48:06.834118 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:48:06.834124 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:48:06.834131 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:48:06.834137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:48:06.834144 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:48:06.834150 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:48:06.834156 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:48:06.834162 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:48:06.834170 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 09:48:06.834176 systemd[1]: Created slice system-getty.slice. Feb 9 09:48:06.834182 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:48:06.834188 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:48:06.834195 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:48:06.834201 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:48:06.834207 systemd[1]: Created slice user.slice. Feb 9 09:48:06.834213 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:48:06.834220 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:48:06.834226 systemd[1]: Set up automount boot.automount. Feb 9 09:48:06.834232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:48:06.834237 systemd[1]: Reached target integritysetup.target. Feb 9 09:48:06.834243 systemd[1]: Reached target remote-cryptsetup.target. 
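The two warnings against /usr/lib/systemd/system/locksmithd.service flag cgroup-v1 era directives (CPUShares=, MemoryLimit=) that systemd 252 still honours but maps onto their cgroup-v2 successors. Since that unit ships read-only in the OS image, the usual way to move to the current directives would be a drop-in of roughly this shape; it is shown purely as a sketch, and the weight and limit values are placeholders rather than anything taken from this system:

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf  (hypothetical drop-in)
    [Service]
    # Successor of CPUShares= on the unified cgroup hierarchy; value is a placeholder.
    CPUWeight=100
    # Successor of MemoryLimit=; value is a placeholder.
    MemoryMax=512M

Because drop-ins are parsed after the shipped unit file, their assignments take effect, although the deprecation warning itself keeps appearing until the original file no longer contains the old directives.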
Feb 9 09:48:06.834249 systemd[1]: Reached target remote-fs.target. Feb 9 09:48:06.834257 systemd[1]: Reached target slices.target. Feb 9 09:48:06.834263 systemd[1]: Reached target swap.target. Feb 9 09:48:06.834270 systemd[1]: Reached target torcx.target. Feb 9 09:48:06.834277 systemd[1]: Reached target veritysetup.target. Feb 9 09:48:06.834283 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:48:06.834289 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:48:06.834295 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:48:06.834302 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:48:06.834308 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:48:06.834314 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:48:06.834321 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:48:06.834328 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:48:06.834335 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:48:06.834341 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:48:06.834347 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:48:06.834353 systemd[1]: Mounting media.mount... Feb 9 09:48:06.834361 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:48:06.834367 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:48:06.834374 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:48:06.834380 systemd[1]: Mounting tmp.mount... Feb 9 09:48:06.834386 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:48:06.834393 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:48:06.834399 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:48:06.834405 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:48:06.834413 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:48:06.834420 systemd[1]: Starting modprobe@drm.service... Feb 9 09:48:06.834426 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:48:06.834432 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:48:06.834439 kernel: fuse: init (API version 7.34) Feb 9 09:48:06.834445 systemd[1]: Starting modprobe@loop.service... Feb 9 09:48:06.834451 kernel: loop: module loaded Feb 9 09:48:06.834457 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:48:06.834465 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:48:06.834471 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:48:06.834477 systemd[1]: Starting systemd-journald.service... Feb 9 09:48:06.834488 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:48:06.834497 systemd-journald[1296]: Journal started Feb 9 09:48:06.834560 systemd-journald[1296]: Runtime Journal (/run/log/journal/79a51def39db4ec5a7964d296478aa3e) is 8.0M, max 639.3M, 631.3M free. 
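systemd-journald started with an 8.0M runtime journal under /run/log/journal and a 639.3M ceiling, limits it derives from the size of the backing filesystem by default. If fixed limits are ever wanted, they belong in journald configuration; a minimal sketch follows, with placeholder values and a hypothetical drop-in path rather than anything read from this host:

    # /etc/systemd/journald.conf.d/10-size.conf  (hypothetical)
    [Journal]
    # Cap for the volatile journal in /run/log/journal.
    RuntimeMaxUse=64M
    # Cap for the persistent journal in /var/log/journal.
    SystemMaxUse=1G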
Feb 9 09:48:06.218000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:48:06.218000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 09:48:06.831000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:48:06.831000 audit[1296]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff04c47c60 a2=4000 a3=7fff04c47cfc items=0 ppid=1 pid=1296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:06.831000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:48:06.866673 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:48:06.888667 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:48:06.909522 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:48:06.944525 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:48:06.959675 systemd[1]: Started systemd-journald.service. Feb 9 09:48:06.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:06.969229 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:48:06.977742 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:48:06.984734 systemd[1]: Mounted media.mount. Feb 9 09:48:06.991734 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:48:07.000716 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:48:07.009713 systemd[1]: Mounted tmp.mount. Feb 9 09:48:07.016839 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:48:07.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.025850 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:48:07.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.034879 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:48:07.035027 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:48:07.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.043919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:48:07.044106 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 9 09:48:07.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.053019 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:48:07.053241 systemd[1]: Finished modprobe@drm.service. Feb 9 09:48:07.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.062332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:48:07.062720 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:48:07.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.071309 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:48:07.071695 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:48:07.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.080292 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:48:07.080678 systemd[1]: Finished modprobe@loop.service. Feb 9 09:48:07.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.088883 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:48:07.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.098847 systemd[1]: Finished systemd-network-generator.service. 
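The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop start/stop pairs above all come from one templated oneshot unit that loads the kernel module named by its instance and then exits, which is why each instance is reported as "Deactivated successfully" right after it finishes. The template is roughly of the following shape; this is a paraphrase of systemd's stock modprobe@.service, not a copy of the file on this image:

    # modprobe@.service (sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # "-" tolerates modules that are built in or unavailable; %i is the instance name.
    ExecStart=-/sbin/modprobe -abq %i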
Feb 9 09:48:07.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.107843 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:48:07.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.116874 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:48:07.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.126914 systemd[1]: Reached target network-pre.target. Feb 9 09:48:07.137912 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:48:07.148070 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:48:07.154676 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:48:07.158027 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:48:07.165445 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:48:07.168317 systemd-journald[1296]: Time spent on flushing to /var/log/journal/79a51def39db4ec5a7964d296478aa3e is 14.585ms for 1558 entries. Feb 9 09:48:07.168317 systemd-journald[1296]: System Journal (/var/log/journal/79a51def39db4ec5a7964d296478aa3e) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:48:07.203369 systemd-journald[1296]: Received client request to flush runtime journal. Feb 9 09:48:07.181602 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:48:07.182120 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:48:07.193621 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:48:07.194167 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:48:07.201096 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:48:07.208156 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:48:07.215839 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:48:07.223658 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:48:07.231729 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:48:07.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.239745 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:48:07.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.247711 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:48:07.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.255702 systemd[1]: Finished systemd-sysusers.service. 
Feb 9 09:48:07.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.264655 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:48:07.273284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:48:07.282722 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:48:07.292820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:48:07.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.314557 kernel: kauditd_printk_skb: 46 callbacks suppressed Feb 9 09:48:07.314583 kernel: audit: type=1130 audit(1707472087.300:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.465358 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:48:07.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.474406 systemd[1]: Starting systemd-udevd.service... Feb 9 09:48:07.517525 kernel: audit: type=1130 audit(1707472087.473:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.528455 systemd-udevd[1330]: Using default interface naming scheme 'v252'. Feb 9 09:48:07.548007 systemd[1]: Started systemd-udevd.service. Feb 9 09:48:07.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.560647 systemd[1]: Found device dev-ttyS1.device. Feb 9 09:48:07.601487 kernel: audit: type=1130 audit(1707472087.556:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:07.619400 systemd[1]: Starting systemd-networkd.service... Feb 9 09:48:07.621489 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:48:07.611000 audit[1378]: AVC avc: denied { confidentiality } for pid=1378 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:48:07.685486 kernel: audit: type=1400 audit(1707472087.611:121): avc: denied { confidentiality } for pid=1378 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:48:07.685521 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 09:48:07.687312 systemd[1]: Starting systemd-userdbd.service... 
Feb 9 09:48:07.725105 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 09:48:07.725174 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1384) Feb 9 09:48:07.725486 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 09:48:07.758296 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 09:48:07.771489 kernel: audit: type=1300 audit(1707472087.611:121): arch=c000003e syscall=175 success=yes exit=0 a0=5606e6a86e30 a1=4d8bc a2=7f1c643d5bc5 a3=5 items=42 ppid=1330 pid=1378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:07.611000 audit[1378]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5606e6a86e30 a1=4d8bc a2=7f1c643d5bc5 a3=5 items=42 ppid=1330 pid=1378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:07.611000 audit: CWD cwd="/" Feb 9 09:48:07.865930 kernel: ACPI: button: Power Button [PWRF] Feb 9 09:48:07.865980 kernel: audit: type=1307 audit(1707472087.611:121): cwd="/" Feb 9 09:48:07.611000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.937696 kernel: audit: type=1302 audit(1707472087.611:121): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.937735 kernel: audit: type=1302 audit(1707472087.611:121): item=1 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.937759 kernel: audit: type=1302 audit(1707472087.611:121): item=2 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.937779 kernel: audit: type=1302 audit(1707472087.611:121): item=3 name=(null) inode=20591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.937796 kernel: IPMI message handler: version 39.2 Feb 9 09:48:07.611000 audit: PATH item=1 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=2 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=3 name=(null) inode=20591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=4 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 
audit: PATH item=5 name=(null) inode=20592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=6 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=7 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=8 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=9 name=(null) inode=20594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=10 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=11 name=(null) inode=20595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=12 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=13 name=(null) inode=20596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=14 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=15 name=(null) inode=20597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=16 name=(null) inode=20593 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=17 name=(null) inode=20598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=18 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=19 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=20 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=21 name=(null) inode=20600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=22 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=23 name=(null) inode=20601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=24 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=25 name=(null) inode=20602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=26 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=27 name=(null) inode=20603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=28 name=(null) inode=20599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=29 name=(null) inode=20604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=30 name=(null) inode=20590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=31 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=32 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=33 name=(null) inode=20606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=34 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=35 name=(null) inode=20607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=36 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=37 name=(null) inode=20608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=38 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=39 name=(null) inode=20609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=40 name=(null) inode=20605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PATH item=41 name=(null) inode=20610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:48:07.611000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:48:08.166397 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 09:48:08.166517 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 09:48:08.190492 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 09:48:08.198531 systemd[1]: Started systemd-userdbd.service. Feb 9 09:48:08.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:08.207523 kernel: ipmi device interface Feb 9 09:48:08.207555 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 09:48:08.207639 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 09:48:08.295486 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 09:48:08.365893 kernel: ipmi_si: IPMI System Interface driver Feb 9 09:48:08.366019 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 09:48:08.366315 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 09:48:08.366337 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 09:48:08.410495 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 09:48:08.410751 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 09:48:08.461486 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 9 09:48:08.505638 systemd-networkd[1407]: bond0: netdev ready Feb 9 09:48:08.508145 systemd-networkd[1407]: lo: Link UP Feb 9 09:48:08.508148 systemd-networkd[1407]: lo: Gained carrier Feb 9 09:48:08.508704 systemd-networkd[1407]: Enumeration completed Feb 9 09:48:08.508793 systemd[1]: Started systemd-networkd.service. Feb 9 09:48:08.509021 systemd-networkd[1407]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 09:48:08.509908 systemd-networkd[1407]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:67.network. Feb 9 09:48:08.510481 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 09:48:08.510586 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 09:48:08.510602 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 09:48:08.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:48:08.603236 kernel: intel_rapl_common: Found RAPL domain package Feb 9 09:48:08.603274 kernel: intel_rapl_common: Found RAPL domain core Feb 9 09:48:08.603288 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 09:48:08.603386 kernel: intel_rapl_common: Found RAPL domain uncore Feb 9 09:48:08.603400 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 09:48:08.736486 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 9 09:48:08.804485 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 09:48:08.825514 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 09:48:08.827705 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:48:08.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:08.837326 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:48:08.852050 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:48:08.885894 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:48:08.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:08.893700 systemd[1]: Reached target cryptsetup.target. Feb 9 09:48:08.902201 systemd[1]: Starting lvm2-activation.service... Feb 9 09:48:08.904301 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:48:08.938994 systemd[1]: Finished lvm2-activation.service. Feb 9 09:48:08.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:08.947745 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:48:08.955599 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:48:08.955614 systemd[1]: Reached target local-fs.target. Feb 9 09:48:08.963580 systemd[1]: Reached target machines.target. Feb 9 09:48:08.972207 systemd[1]: Starting ldconfig.service... Feb 9 09:48:08.978807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:48:08.978847 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:48:08.979451 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:48:08.987116 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:48:08.997128 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:48:08.997235 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:48:08.997271 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:48:08.997824 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Feb 9 09:48:08.998036 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1441 (bootctl) Feb 9 09:48:08.998666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:48:09.017912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:48:09.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:09.026023 systemd-tmpfiles[1445]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:48:09.035275 systemd-tmpfiles[1445]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:48:09.044825 systemd-tmpfiles[1445]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:48:09.149496 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 09:48:09.176519 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 9 09:48:09.178426 systemd-networkd[1407]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 9 09:48:09.262487 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:48:09.339486 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 09:48:09.364486 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 9 09:48:09.364513 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 09:48:09.384484 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:48:09.409905 systemd-networkd[1407]: bond0: Link UP Feb 9 09:48:09.410038 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:48:09.410154 systemd-networkd[1407]: enp2s0f1np1: Link UP Feb 9 09:48:09.410326 systemd-networkd[1407]: enp2s0f1np1: Gained carrier Feb 9 09:48:09.410409 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:48:09.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:09.411626 systemd-networkd[1407]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 9 09:48:09.436889 systemd-fsck[1450]: fsck.fat 4.2 (2021-01-31) Feb 9 09:48:09.436889 systemd-fsck[1450]: /dev/sda1: 789 files, 115332/258078 clusters Feb 9 09:48:09.441111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:48:09.453709 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:48:09.453745 kernel: bond0: active interface up! Feb 9 09:48:09.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:09.475787 systemd[1]: Mounting boot.mount... Feb 9 09:48:09.476484 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 09:48:09.487016 systemd[1]: Mounted boot.mount. 
Feb 9 09:48:09.512484 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:48:09.514269 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:48:09.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:09.541122 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:48:09.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:09.550299 systemd[1]: Starting audit-rules.service... Feb 9 09:48:09.557158 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:48:09.566190 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:48:09.566000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:48:09.566000 audit[1475]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeedf22730 a2=420 a3=0 items=0 ppid=1458 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:09.566000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:48:09.567193 augenrules[1475]: No rules Feb 9 09:48:09.567634 systemd-networkd[1407]: enp2s0f0np0: Link UP Feb 9 09:48:09.567801 systemd-networkd[1407]: bond0: Gained carrier Feb 9 09:48:09.567889 systemd-networkd[1407]: enp2s0f0np0: Gained carrier Feb 9 09:48:09.575345 systemd[1]: Starting systemd-resolved.service... Feb 9 09:48:09.580803 systemd-networkd[1407]: enp2s0f1np1: Link DOWN Feb 9 09:48:09.580806 systemd-networkd[1407]: enp2s0f1np1: Lost carrier Feb 9 09:48:09.584318 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:48:09.605528 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:48:09.605563 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 9 09:48:09.618052 ldconfig[1439]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:48:09.632185 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:48:09.638846 systemd[1]: Finished ldconfig.service. Feb 9 09:48:09.645711 systemd[1]: Finished audit-rules.service. Feb 9 09:48:09.652650 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:48:09.660652 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:48:09.672402 systemd[1]: Starting systemd-update-done.service... Feb 9 09:48:09.680547 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:48:09.680880 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:48:09.689649 systemd[1]: Finished systemd-update-done.service. Feb 9 09:48:09.703807 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:48:09.706136 systemd-resolved[1482]: Positive Trust Anchors: Feb 9 09:48:09.706144 systemd-resolved[1482]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:48:09.706174 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:48:09.709800 systemd-resolved[1482]: Using system hostname 'ci-3510.3.2-a-c37e5c1643'. Feb 9 09:48:09.712598 systemd[1]: Reached target time-set.target. Feb 9 09:48:09.775523 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 09:48:09.796522 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Feb 9 09:48:09.797401 systemd-networkd[1407]: enp2s0f1np1: Link UP Feb 9 09:48:09.797573 systemd-networkd[1407]: enp2s0f1np1: Gained carrier Feb 9 09:48:09.798388 systemd[1]: Started systemd-resolved.service. Feb 9 09:48:09.806598 systemd[1]: Reached target network.target. Feb 9 09:48:09.814571 systemd[1]: Reached target nss-lookup.target. Feb 9 09:48:09.829578 systemd[1]: Reached target sysinit.target. Feb 9 09:48:09.836518 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 9 09:48:09.852609 systemd[1]: Started motdgen.path. Feb 9 09:48:09.858526 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:48:09.864574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:48:09.874625 systemd[1]: Started logrotate.timer. Feb 9 09:48:09.881596 systemd[1]: Started mdadm.timer. Feb 9 09:48:09.888558 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:48:09.896565 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:48:09.896580 systemd[1]: Reached target paths.target. Feb 9 09:48:09.903546 systemd[1]: Reached target timers.target. Feb 9 09:48:09.910683 systemd[1]: Listening on dbus.socket. Feb 9 09:48:09.918168 systemd[1]: Starting docker.socket... Feb 9 09:48:09.925224 systemd[1]: Listening on sshd.socket. Feb 9 09:48:09.931611 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:48:09.931803 systemd[1]: Listening on docker.socket. Feb 9 09:48:09.938575 systemd[1]: Reached target sockets.target. Feb 9 09:48:09.946565 systemd[1]: Reached target basic.target. Feb 9 09:48:09.953616 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:48:09.953639 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:48:09.953652 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:48:09.954145 systemd[1]: Starting containerd.service... Feb 9 09:48:09.960978 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:48:09.970046 systemd[1]: Starting coreos-metadata.service... Feb 9 09:48:09.977124 systemd[1]: Starting dbus.service... Feb 9 09:48:09.983169 systemd[1]: Starting enable-oem-cloudinit.service... 
Feb 9 09:48:09.987825 jq[1504]: false Feb 9 09:48:09.989179 coreos-metadata[1497]: Feb 09 09:48:09.989 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:48:09.990232 systemd[1]: Starting extend-filesystems.service... Feb 9 09:48:09.995493 dbus-daemon[1503]: [system] SELinux support is enabled Feb 9 09:48:09.997606 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:48:09.998364 systemd[1]: Starting motdgen.service... Feb 9 09:48:09.998448 extend-filesystems[1507]: Found sda Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda1 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda2 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda3 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found usr Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda4 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda6 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda7 Feb 9 09:48:10.019666 extend-filesystems[1507]: Found sda9 Feb 9 09:48:10.019666 extend-filesystems[1507]: Checking size of /dev/sda9 Feb 9 09:48:10.019666 extend-filesystems[1507]: Resized partition /dev/sda9 Feb 9 09:48:10.144601 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 9 09:48:10.006476 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:48:10.144679 coreos-metadata[1500]: Feb 09 09:48:09.999 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:48:10.144798 extend-filesystems[1522]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:48:10.030342 systemd[1]: Starting prepare-critools.service... Feb 9 09:48:10.054237 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:48:10.069156 systemd[1]: Starting sshd-keygen.service... Feb 9 09:48:10.083576 systemd[1]: Starting systemd-logind.service... Feb 9 09:48:10.096522 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:48:10.097197 systemd[1]: Starting tcsd.service... Feb 9 09:48:10.165059 update_engine[1540]: I0209 09:48:10.159991 1540 main.cc:92] Flatcar Update Engine starting Feb 9 09:48:10.165059 update_engine[1540]: I0209 09:48:10.163281 1540 update_check_scheduler.cc:74] Next update check in 4m51s Feb 9 09:48:10.107848 systemd-logind[1538]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 09:48:10.165314 jq[1541]: true Feb 9 09:48:10.107857 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 09:48:10.107866 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 09:48:10.108056 systemd-logind[1538]: New seat seat0. Feb 9 09:48:10.110134 systemd[1]: Starting update-engine.service... Feb 9 09:48:10.117150 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:48:10.136869 systemd[1]: Started dbus.service. Feb 9 09:48:10.158179 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:48:10.158321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:48:10.158471 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:48:10.158591 systemd[1]: Finished motdgen.service. Feb 9 09:48:10.172590 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:48:10.172721 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 09:48:10.178344 tar[1545]: ./ Feb 9 09:48:10.178344 tar[1545]: ./macvlan Feb 9 09:48:10.183193 jq[1549]: true Feb 9 09:48:10.184032 tar[1546]: crictl Feb 9 09:48:10.184346 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:48:10.188153 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 09:48:10.188348 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 09:48:10.189358 systemd[1]: Started systemd-logind.service. Feb 9 09:48:10.192956 env[1550]: time="2024-02-09T09:48:10.192927604Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:48:10.199531 tar[1545]: ./static Feb 9 09:48:10.201928 env[1550]: time="2024-02-09T09:48:10.201882155Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:48:10.202032 systemd[1]: Started update-engine.service. Feb 9 09:48:10.202517 env[1550]: time="2024-02-09T09:48:10.202506481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.203160 env[1550]: time="2024-02-09T09:48:10.203117902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:48:10.203160 env[1550]: time="2024-02-09T09:48:10.203133228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.205202 env[1550]: time="2024-02-09T09:48:10.205184833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:48:10.205245 env[1550]: time="2024-02-09T09:48:10.205202731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.205245 env[1550]: time="2024-02-09T09:48:10.205213974Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:48:10.205245 env[1550]: time="2024-02-09T09:48:10.205223859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.205294 env[1550]: time="2024-02-09T09:48:10.205279826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.207610 env[1550]: time="2024-02-09T09:48:10.207550818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:48:10.207711 env[1550]: time="2024-02-09T09:48:10.207677245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:48:10.207711 env[1550]: time="2024-02-09T09:48:10.207692869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 09:48:10.209874 env[1550]: time="2024-02-09T09:48:10.209831237Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:48:10.209874 env[1550]: time="2024-02-09T09:48:10.209846424Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:48:10.210040 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:48:10.210702 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:48:10.216262 env[1550]: time="2024-02-09T09:48:10.216213654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:48:10.216262 env[1550]: time="2024-02-09T09:48:10.216237552Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:48:10.216262 env[1550]: time="2024-02-09T09:48:10.216249870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:48:10.216330 env[1550]: time="2024-02-09T09:48:10.216272471Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216330 env[1550]: time="2024-02-09T09:48:10.216284649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216330 env[1550]: time="2024-02-09T09:48:10.216296688Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216330 env[1550]: time="2024-02-09T09:48:10.216308135Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216330 env[1550]: time="2024-02-09T09:48:10.216322689Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216419 env[1550]: time="2024-02-09T09:48:10.216334215Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216419 env[1550]: time="2024-02-09T09:48:10.216345306Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216419 env[1550]: time="2024-02-09T09:48:10.216355319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216419 env[1550]: time="2024-02-09T09:48:10.216364829Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:48:10.216483 env[1550]: time="2024-02-09T09:48:10.216429496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:48:10.216504 env[1550]: time="2024-02-09T09:48:10.216494829Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:48:10.216728 env[1550]: time="2024-02-09T09:48:10.216713650Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:48:10.216776 env[1550]: time="2024-02-09T09:48:10.216739157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.216776 env[1550]: time="2024-02-09T09:48:10.216754084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 9 09:48:10.216979 env[1550]: time="2024-02-09T09:48:10.216958481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217019 env[1550]: time="2024-02-09T09:48:10.216983686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217062 env[1550]: time="2024-02-09T09:48:10.217043271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217094 env[1550]: time="2024-02-09T09:48:10.217072135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217178 env[1550]: time="2024-02-09T09:48:10.217092380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217178 env[1550]: time="2024-02-09T09:48:10.217160245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217178 env[1550]: time="2024-02-09T09:48:10.217171673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217232 env[1550]: time="2024-02-09T09:48:10.217179179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217232 env[1550]: time="2024-02-09T09:48:10.217188238Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:48:10.217267 env[1550]: time="2024-02-09T09:48:10.217255821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217267 env[1550]: time="2024-02-09T09:48:10.217265267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217300 env[1550]: time="2024-02-09T09:48:10.217272460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:48:10.217300 env[1550]: time="2024-02-09T09:48:10.217281029Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:48:10.217300 env[1550]: time="2024-02-09T09:48:10.217289194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:48:10.217300 env[1550]: time="2024-02-09T09:48:10.217296823Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:48:10.217358 env[1550]: time="2024-02-09T09:48:10.217309005Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:48:10.217358 env[1550]: time="2024-02-09T09:48:10.217331105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:48:10.217466 env[1550]: time="2024-02-09T09:48:10.217441637Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217472515Z" level=info msg="Connect containerd service" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217505410Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217758099Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217842654Z" level=info msg="Start subscribing containerd event" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217862483Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217874601Z" level=info msg="Start recovering state" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217883502Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217911351Z" level=info msg="containerd successfully booted in 0.025405s" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217915834Z" level=info msg="Start event monitor" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217931738Z" level=info msg="Start snapshots syncer" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217938881Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:48:10.220060 env[1550]: time="2024-02-09T09:48:10.217944383Z" level=info msg="Start streaming server" Feb 9 09:48:10.220696 systemd[1]: Started containerd.service. Feb 9 09:48:10.224789 tar[1545]: ./vlan Feb 9 09:48:10.229152 systemd[1]: Started locksmithd.service. Feb 9 09:48:10.235604 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:48:10.235685 systemd[1]: Reached target system-config.target. Feb 9 09:48:10.243604 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:48:10.243676 systemd[1]: Reached target user-config.target. Feb 9 09:48:10.245936 tar[1545]: ./portmap Feb 9 09:48:10.265948 tar[1545]: ./host-local Feb 9 09:48:10.283631 tar[1545]: ./vrf Feb 9 09:48:10.285541 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:48:10.302756 tar[1545]: ./bridge Feb 9 09:48:10.325637 tar[1545]: ./tuning Feb 9 09:48:10.343913 tar[1545]: ./firewall Feb 9 09:48:10.367576 tar[1545]: ./host-device Feb 9 09:48:10.388219 tar[1545]: ./sbr Feb 9 09:48:10.407036 tar[1545]: ./loopback Feb 9 09:48:10.424948 tar[1545]: ./dhcp Feb 9 09:48:10.451458 systemd[1]: Finished prepare-critools.service. Feb 9 09:48:10.476834 tar[1545]: ./ptp Feb 9 09:48:10.499017 tar[1545]: ./ipvlan Feb 9 09:48:10.507485 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 9 09:48:10.535747 extend-filesystems[1522]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 09:48:10.535747 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 09:48:10.535747 extend-filesystems[1522]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 9 09:48:10.572554 extend-filesystems[1507]: Resized filesystem in /dev/sda9 Feb 9 09:48:10.572554 extend-filesystems[1507]: Found sdb Feb 9 09:48:10.536283 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:48:10.587631 tar[1545]: ./bandwidth Feb 9 09:48:10.536413 systemd[1]: Finished extend-filesystems.service. Feb 9 09:48:10.574233 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:48:10.642384 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:48:10.653905 systemd[1]: Finished sshd-keygen.service. Feb 9 09:48:10.661560 systemd[1]: Starting issuegen.service... Feb 9 09:48:10.668975 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:48:10.669076 systemd[1]: Finished issuegen.service. Feb 9 09:48:10.676601 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:48:10.685867 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:48:10.694331 systemd[1]: Started getty@tty1.service. Feb 9 09:48:10.702287 systemd[1]: Started serial-getty@ttyS1.service. Feb 9 09:48:10.710770 systemd[1]: Reached target getty.target. 
Feb 9 09:48:11.191560 systemd-networkd[1407]: bond0: Gained IPv6LL Feb 9 09:48:12.203677 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 09:48:15.722082 login[1624]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:48:15.731247 systemd-logind[1538]: New session 1 of user core. Feb 9 09:48:15.731298 login[1623]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:48:15.731783 systemd[1]: Created slice user-500.slice. Feb 9 09:48:15.732382 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:48:15.734588 systemd-logind[1538]: New session 2 of user core. Feb 9 09:48:15.738092 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:48:15.738757 systemd[1]: Starting user@500.service... Feb 9 09:48:15.740876 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:15.806258 systemd[1630]: Queued start job for default target default.target. Feb 9 09:48:15.806357 systemd[1630]: Reached target paths.target. Feb 9 09:48:15.806368 systemd[1630]: Reached target sockets.target. Feb 9 09:48:15.806375 systemd[1630]: Reached target timers.target. Feb 9 09:48:15.806382 systemd[1630]: Reached target basic.target. Feb 9 09:48:15.806400 systemd[1630]: Reached target default.target. Feb 9 09:48:15.806413 systemd[1630]: Startup finished in 62ms. Feb 9 09:48:15.806473 systemd[1]: Started user@500.service. Feb 9 09:48:15.806983 systemd[1]: Started session-1.scope. Feb 9 09:48:15.807328 systemd[1]: Started session-2.scope. Feb 9 09:48:15.864578 coreos-metadata[1497]: Feb 09 09:48:15.864 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:48:15.865424 coreos-metadata[1500]: Feb 09 09:48:15.864 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:48:16.864834 coreos-metadata[1497]: Feb 09 09:48:16.864 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:48:16.865716 coreos-metadata[1500]: Feb 09 09:48:16.864 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:48:17.558487 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 9 09:48:17.565484 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 9 09:48:17.939013 coreos-metadata[1500]: Feb 09 09:48:17.938 INFO Fetch successful Feb 9 09:48:17.939847 coreos-metadata[1497]: Feb 09 09:48:17.938 INFO Fetch successful Feb 9 09:48:17.964819 systemd[1]: Finished coreos-metadata.service. Feb 9 09:48:17.965426 unknown[1497]: wrote ssh authorized keys file for user: core Feb 9 09:48:17.965931 systemd[1]: Started packet-phone-home.service. Feb 9 09:48:17.971367 curl[1657]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 09:48:17.971588 curl[1657]: Dload Upload Total Spent Left Speed Feb 9 09:48:17.976600 update-ssh-keys[1659]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:48:17.976794 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 09:48:17.976954 systemd[1]: Reached target multi-user.target. Feb 9 09:48:17.977666 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:48:17.981331 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 9 09:48:17.981437 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:48:17.981593 systemd[1]: Startup finished in 20.529s (kernel) + 14.640s (userspace) = 35.170s. Feb 9 09:48:18.162795 systemd[1]: Created slice system-sshd.slice. Feb 9 09:48:18.163406 systemd[1]: Started sshd@0-139.178.94.23:22-147.75.109.163:53376.service. Feb 9 09:48:18.184962 curl[1657]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 09:48:18.185544 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 09:48:18.203231 sshd[1664]: Accepted publickey for core from 147.75.109.163 port 53376 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.204274 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.207738 systemd-logind[1538]: New session 3 of user core. Feb 9 09:48:18.208439 systemd[1]: Started session-3.scope. Feb 9 09:48:18.261748 systemd[1]: Started sshd@1-139.178.94.23:22-147.75.109.163:53386.service. Feb 9 09:48:18.291087 sshd[1670]: Accepted publickey for core from 147.75.109.163 port 53386 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.294307 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.305383 systemd-logind[1538]: New session 4 of user core. Feb 9 09:48:18.308206 systemd[1]: Started session-4.scope. Feb 9 09:48:18.365875 systemd-timesyncd[1484]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org). Feb 9 09:48:18.366035 systemd-timesyncd[1484]: Initial clock synchronization to Fri 2024-02-09 09:48:18.083785 UTC. Feb 9 09:48:18.378387 sshd[1670]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.379629 systemd[1]: Started sshd@2-139.178.94.23:22-147.75.109.163:53398.service. Feb 9 09:48:18.380092 systemd[1]: sshd@1-139.178.94.23:22-147.75.109.163:53386.service: Deactivated successfully. Feb 9 09:48:18.380530 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:48:18.380551 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:48:18.381033 systemd-logind[1538]: Removed session 4. Feb 9 09:48:18.409937 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 53398 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.413089 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.423181 systemd-logind[1538]: New session 5 of user core. Feb 9 09:48:18.425393 systemd[1]: Started session-5.scope. Feb 9 09:48:18.492651 sshd[1675]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.494081 systemd[1]: Started sshd@3-139.178.94.23:22-147.75.109.163:53400.service. Feb 9 09:48:18.494321 systemd[1]: sshd@2-139.178.94.23:22-147.75.109.163:53398.service: Deactivated successfully. Feb 9 09:48:18.494808 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:48:18.494856 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:48:18.495352 systemd-logind[1538]: Removed session 5. Feb 9 09:48:18.523972 sshd[1683]: Accepted publickey for core from 147.75.109.163 port 53400 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.525003 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.528452 systemd-logind[1538]: New session 6 of user core. Feb 9 09:48:18.529246 systemd[1]: Started session-6.scope. 
Feb 9 09:48:18.596756 sshd[1683]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.602553 systemd[1]: Started sshd@4-139.178.94.23:22-147.75.109.163:53408.service. Feb 9 09:48:18.604032 systemd[1]: sshd@3-139.178.94.23:22-147.75.109.163:53400.service: Deactivated successfully. Feb 9 09:48:18.606393 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:48:18.606586 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:48:18.609083 systemd-logind[1538]: Removed session 6. Feb 9 09:48:18.664506 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 53408 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.666739 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.674250 systemd-logind[1538]: New session 7 of user core. Feb 9 09:48:18.675785 systemd[1]: Started session-7.scope. Feb 9 09:48:18.766198 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 09:48:18.766825 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:48:18.788955 dbus-daemon[1503]: received setenforce notice (enforcing=1) Feb 9 09:48:18.794024 sudo[1695]: pam_unix(sudo:session): session closed for user root Feb 9 09:48:18.799514 sshd[1689]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.805652 systemd[1]: Started sshd@5-139.178.94.23:22-147.75.109.163:53412.service. Feb 9 09:48:18.807178 systemd[1]: sshd@4-139.178.94.23:22-147.75.109.163:53408.service: Deactivated successfully. Feb 9 09:48:18.809633 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:48:18.809770 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:48:18.812180 systemd-logind[1538]: Removed session 7. Feb 9 09:48:18.840220 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 53412 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:18.840943 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:18.843487 systemd-logind[1538]: New session 8 of user core. Feb 9 09:48:18.843931 systemd[1]: Started session-8.scope. Feb 9 09:48:18.900567 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 09:48:18.901157 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:48:18.908684 sudo[1704]: pam_unix(sudo:session): session closed for user root Feb 9 09:48:18.920819 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 09:48:18.921407 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:48:18.946240 systemd[1]: Stopping audit-rules.service... Feb 9 09:48:18.948000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:48:18.949802 auditctl[1707]: No rules Feb 9 09:48:18.950586 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 09:48:18.951151 systemd[1]: Stopped audit-rules.service. 
Feb 9 09:48:18.955193 kernel: kauditd_printk_skb: 52 callbacks suppressed Feb 9 09:48:18.955363 kernel: audit: type=1305 audit(1707472098.948:133): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:48:18.955263 systemd[1]: Starting audit-rules.service... Feb 9 09:48:18.948000 audit[1707]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5fc67130 a2=420 a3=0 items=0 ppid=1 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:18.993514 augenrules[1725]: No rules Feb 9 09:48:18.994161 systemd[1]: Finished audit-rules.service. Feb 9 09:48:18.994995 sudo[1703]: pam_unix(sudo:session): session closed for user root Feb 9 09:48:18.996323 sshd[1697]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:18.998720 systemd[1]: Started sshd@6-139.178.94.23:22-147.75.109.163:53424.service. Feb 9 09:48:18.999328 systemd[1]: sshd@5-139.178.94.23:22-147.75.109.163:53412.service: Deactivated successfully. Feb 9 09:48:19.000208 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:48:19.000222 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:48:19.001066 systemd-logind[1538]: Removed session 8. Feb 9 09:48:19.002300 kernel: audit: type=1300 audit(1707472098.948:133): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc5fc67130 a2=420 a3=0 items=0 ppid=1 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:19.002386 kernel: audit: type=1327 audit(1707472098.948:133): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:48:18.948000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:48:18.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.033179 kernel: audit: type=1131 audit(1707472098.950:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.033216 kernel: audit: type=1130 audit(1707472098.993:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:18.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:18.994000 audit[1703]: USER_END pid=1703 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.056176 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 53424 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:48:19.057704 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:48:19.059414 systemd-logind[1538]: New session 9 of user core. 
Feb 9 09:48:19.059947 systemd[1]: Started session-9.scope. Feb 9 09:48:19.079897 kernel: audit: type=1106 audit(1707472098.994:136): pid=1703 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.079940 kernel: audit: type=1104 audit(1707472098.994:137): pid=1703 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:18.994000 audit[1703]: CRED_DISP pid=1703 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:18.997000 audit[1697]: USER_END pid=1697 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.105018 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:48:19.105121 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:48:19.133686 kernel: audit: type=1106 audit(1707472098.997:138): pid=1697 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.133732 kernel: audit: type=1104 audit(1707472098.997:139): pid=1697 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:18.997000 audit[1697]: CRED_DISP pid=1697 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:18.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.94.23:22-147.75.109.163:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.183280 kernel: audit: type=1130 audit(1707472098.998:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.94.23:22-147.75.109.163:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:18.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.94.23:22-147.75.109.163:53412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:48:19.055000 audit[1730]: USER_ACCT pid=1730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.057000 audit[1730]: CRED_ACQ pid=1730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.057000 audit[1730]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc786c9f00 a2=3 a3=0 items=0 ppid=1 pid=1730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:19.057000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:48:19.061000 audit[1730]: USER_START pid=1730 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.061000 audit[1735]: CRED_ACQ pid=1735 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:19.103000 audit[1736]: USER_ACCT pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.103000 audit[1736]: CRED_REFR pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:19.104000 audit[1736]: USER_START pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:23.443762 systemd[1]: Reloading. Feb 9 09:48:23.470949 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2024-02-09T09:48:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:48:23.470966 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2024-02-09T09:48:23Z" level=info msg="torcx already run" Feb 9 09:48:23.534407 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:48:23.534421 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:48:23.549525 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 09:48:23.598442 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:48:23.602166 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:48:23.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:23.602514 systemd[1]: Reached target network-online.target. Feb 9 09:48:23.603267 systemd[1]: Started kubelet.service. Feb 9 09:48:23.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:23.626666 kubelet[1832]: E0209 09:48:23.626620 1832 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:48:23.627862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:48:23.627953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:48:23.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:48:23.954889 systemd[1]: Stopped kubelet.service. Feb 9 09:48:23.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:23.960804 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 09:48:23.960915 kernel: audit: type=1130 audit(1707472103.954:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:23.980107 systemd[1]: Reloading. Feb 9 09:48:23.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:24.024257 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-09T09:48:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:48:24.024274 /usr/lib/systemd/system-generators/torcx-generator[1936]: time="2024-02-09T09:48:24Z" level=info msg="torcx already run" Feb 9 09:48:24.053451 kernel: audit: type=1131 audit(1707472103.954:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:24.076937 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:48:24.076944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 09:48:24.088934 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:48:24.139115 systemd[1]: Started kubelet.service. Feb 9 09:48:24.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:24.160930 kubelet[2000]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:48:24.160930 kubelet[2000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:48:24.161322 kubelet[2000]: I0209 09:48:24.160947 2000 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:48:24.161660 kubelet[2000]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:48:24.161660 kubelet[2000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:48:24.194484 kernel: audit: type=1130 audit(1707472104.137:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:24.356784 kubelet[2000]: I0209 09:48:24.356728 2000 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:48:24.356784 kubelet[2000]: I0209 09:48:24.356737 2000 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:48:24.356891 kubelet[2000]: I0209 09:48:24.356852 2000 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:48:24.357942 kubelet[2000]: I0209 09:48:24.357902 2000 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:48:24.376918 kubelet[2000]: I0209 09:48:24.376879 2000 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:48:24.377073 kubelet[2000]: I0209 09:48:24.377042 2000 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:48:24.377096 kubelet[2000]: I0209 09:48:24.377078 2000 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:48:24.377096 kubelet[2000]: I0209 09:48:24.377089 2000 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:48:24.377096 kubelet[2000]: I0209 09:48:24.377095 2000 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:48:24.377194 kubelet[2000]: I0209 09:48:24.377148 2000 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:48:24.378496 kubelet[2000]: I0209 09:48:24.378489 2000 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:48:24.378496 kubelet[2000]: I0209 09:48:24.378498 2000 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:48:24.378550 kubelet[2000]: I0209 09:48:24.378509 2000 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:48:24.378550 kubelet[2000]: I0209 09:48:24.378517 2000 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:48:24.378612 kubelet[2000]: E0209 09:48:24.378605 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:24.378632 kubelet[2000]: E0209 09:48:24.378613 2000 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:24.378875 kubelet[2000]: I0209 09:48:24.378866 2000 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:48:24.379079 kubelet[2000]: W0209 09:48:24.379049 2000 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
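Both deprecation warnings above point at the kubelet config file rather than command-line flags. volumePluginDir has a direct KubeletConfiguration field; the sketch below reuses the Flexvolume directory the kubelet just reported recreating, while the file path and everything else are placeholders:

    # Hypothetical KubeletConfiguration fragment, e.g. /etc/kubernetes/kubelet-config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no KubeletConfiguration equivalent; as the warning itself says, the sandbox image is expected to come from the CRI side (for containerd, the sandbox_image setting of its CRI plugin config).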
Feb 9 09:48:24.379283 kubelet[2000]: I0209 09:48:24.379278 2000 server.go:1186] "Started kubelet" Feb 9 09:48:24.379331 kubelet[2000]: I0209 09:48:24.379326 2000 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:48:24.379451 kubelet[2000]: E0209 09:48:24.379441 2000 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:48:24.379488 kubelet[2000]: E0209 09:48:24.379456 2000 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:48:24.380117 kubelet[2000]: I0209 09:48:24.380107 2000 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:48:24.380188 kubelet[2000]: I0209 09:48:24.380173 2000 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:48:24.380288 kubelet[2000]: I0209 09:48:24.380279 2000 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:48:24.379000 audit[2000]: AVC avc: denied { mac_admin } for pid=2000 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:48:24.380429 kubelet[2000]: I0209 09:48:24.380369 2000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:48:24.380465 kubelet[2000]: I0209 09:48:24.380429 2000 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:48:24.380496 kubelet[2000]: I0209 09:48:24.380470 2000 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:48:24.386813 kubelet[2000]: E0209 09:48:24.386765 2000 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:24.386932 kubelet[2000]: W0209 09:48:24.386901 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:24.386932 kubelet[2000]: E0209 09:48:24.386918 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:24.379000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:48:24.437695 kubelet[2000]: W0209 09:48:24.437646 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:24.437695 kubelet[2000]: E0209 09:48:24.437672 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes 
"10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:24.437695 kubelet[2000]: E0209 09:48:24.437626 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d5156a3b1d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 379267869, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 379267869, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:24.438267 kubelet[2000]: W0209 09:48:24.438229 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:24.438267 kubelet[2000]: E0209 09:48:24.438241 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:24.439443 kubelet[2000]: E0209 09:48:24.439404 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d5156d00a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 379449510, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 379449510, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:48:24.450374 kubelet[2000]: I0209 09:48:24.450365 2000 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:48:24.450374 kubelet[2000]: I0209 09:48:24.450374 2000 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:48:24.450427 kubelet[2000]: I0209 09:48:24.450383 2000 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:48:24.451242 kubelet[2000]: I0209 09:48:24.451237 2000 policy_none.go:49] "None policy: Start" Feb 9 09:48:24.451461 kubelet[2000]: I0209 09:48:24.451454 2000 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:48:24.451498 kubelet[2000]: I0209 09:48:24.451465 2000 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:48:24.451498 kubelet[2000]: E0209 09:48:24.451463 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
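The mac_admin AVC denials and the "could not set selinux context" messages a few entries back are the kubelet failing setxattr on its plugin directories; it downgrades them to warnings and keeps going. If plugin behaviour needs to be checked later, the labels those directories ended up with can be inspected directly (a hypothetical check, assuming SELinux tooling is present on the host):

    getenforce
    ls -Zd /var/lib/kubelet/plugins_registry /var/lib/kubelet/plugins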
Feb 9 09:48:24.452727 kubelet[2000]: E0209 09:48:24.452704 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:24.453926 kubelet[2000]: E0209 09:48:24.453874 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:48:24.467166 kernel: audit: type=1400 audit(1707472104.379:156): avc: denied { mac_admin } for pid=2000 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:48:24.467205 kernel: audit: type=1401 audit(1707472104.379:156): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:48:24.467217 kernel: audit: type=1300 audit(1707472104.379:156): arch=c000003e syscall=188 success=no exit=-22 a0=c00134a330 a1=c00011be60 a2=c00134a300 a3=25 items=0 ppid=1 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.379000 audit[2000]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00134a330 a1=c00011be60 a2=c00134a300 a3=25 items=0 ppid=1 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.481594 kubelet[2000]: I0209 09:48:24.481539 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:24.483241 kubelet[2000]: E0209 09:48:24.483197 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 481523524, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:48:24.483387 kubelet[2000]: E0209 09:48:24.483379 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:24.484642 kubelet[2000]: E0209 09:48:24.484585 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 481525813, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:24.485902 kubelet[2000]: E0209 09:48:24.485874 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 481527163, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
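Every 'forbidden: User "system:anonymous"' error in this stretch has the same shape: the kubelet's requests are reaching the API server unauthenticated, which typically means the client certificate referenced by /etc/kubernetes/kubelet.conf does not exist yet and the TLS bootstrap (the --bootstrap-kubeconfig visible in the audit PROCTITLE records nearby) has not completed. They normally stop once the bootstrap group is allowed to create CSRs and the node's CSR is approved. A sketch of the usual binding, assuming the stock system:node-bootstrapper ClusterRole and the generic system:bootstrappers group (the binding name is a placeholder; kubeadm-style clusters ship an equivalent already):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-bootstrap        # placeholder name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node-bootstrapper
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers

Pending requests can then be listed and approved with kubectl get csr and kubectl certificate approve <name>.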
Feb 9 09:48:24.554296 kernel: audit: type=1327 audit(1707472104.379:156): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:48:24.379000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:48:24.588921 kubelet[2000]: E0209 09:48:24.588886 2000 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:24.641602 kernel: audit: type=1400 audit(1707472104.379:157): avc: denied { mac_admin } for pid=2000 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:48:24.379000 audit[2000]: AVC avc: denied { mac_admin } for pid=2000 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:48:24.642839 kubelet[2000]: I0209 09:48:24.642832 2000 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:48:24.642872 kubelet[2000]: I0209 09:48:24.642860 2000 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:48:24.642944 kubelet[2000]: I0209 09:48:24.642936 2000 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:48:24.643137 kubelet[2000]: E0209 09:48:24.643129 2000 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.11\" not found" Feb 9 09:48:24.643904 kubelet[2000]: E0209 09:48:24.643875 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d52522d1b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 643023288, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 643023288, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: 
User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:24.684533 kubelet[2000]: I0209 09:48:24.684515 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:24.686383 kubelet[2000]: E0209 09:48:24.686348 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:24.686383 kubelet[2000]: E0209 09:48:24.686364 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 684499622, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:24.687679 kubelet[2000]: E0209 09:48:24.687626 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 684501863, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
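The lease and node-registration attempts keep failing with the same anonymous-user error and retry with a doubling backoff; they will continue until the kubelet can authenticate. Once it can, the registration and its heartbeat lease can be confirmed from any machine with cluster credentials; a hypothetical check using the node name from this log:

    kubectl get node 10.67.80.11
    kubectl get lease 10.67.80.11 -n kube-node-lease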
Feb 9 09:48:24.703183 kernel: audit: type=1401 audit(1707472104.379:157): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:48:24.379000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:48:24.734682 kernel: audit: type=1300 audit(1707472104.379:157): arch=c000003e syscall=188 success=no exit=-22 a0=c00148a900 a1=c00011be78 a2=c00134a3c0 a3=25 items=0 ppid=1 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.379000 audit[2000]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00148a900 a1=c00011be78 a2=c00134a3c0 a3=25 items=0 ppid=1 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.827330 kubelet[2000]: E0209 09:48:24.780804 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 684503088, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
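The proctitle= fields in these audit records are hex-encoded argv strings with NUL separators between arguments, so they can be turned back into readable command lines with standard tools. For example, the leading portion of the kubelet PROCTITLE (shown a few records up and repeated at the start of the next record) decodes as follows, assuming only xxd and tr are available:

    echo 2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66 \
        | xxd -r -p | tr '\0' ' ' ; echo
    # -> /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

Decoded the same way, the iptables and ip6tables records that follow are chain and rule setup of the form "iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle", i.e. the kubelet creating its KUBE-MARK-DROP, KUBE-MARK-MASQ, KUBE-POSTROUTING, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains.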
Feb 9 09:48:24.379000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:48:24.642000 audit[2000]: AVC avc: denied { mac_admin } for pid=2000 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:48:24.642000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:48:24.642000 audit[2000]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0012ad8f0 a1=c001213e60 a2=c0012ad8c0 a3=25 items=0 ppid=1 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.642000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:48:24.826000 audit[2026]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.826000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe7170ffb0 a2=0 a3=7ffe7170ff9c items=0 ppid=2000 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:48:24.826000 audit[2028]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.826000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffdcd0d57f0 a2=0 a3=7ffdcd0d57dc items=0 ppid=2000 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:48:24.827000 audit[2030]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.827000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd9b126cd0 a2=0 a3=7ffd9b126cbc items=0 ppid=2000 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:48:24.860000 audit[2035]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.860000 audit[2035]: SYSCALL arch=c000003e syscall=46 
success=yes exit=312 a0=3 a1=7ffcc9d1bee0 a2=0 a3=7ffcc9d1becc items=0 ppid=2000 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:48:24.886000 audit[2040]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.886000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe497077a0 a2=0 a3=7ffe4970778c items=0 ppid=2000 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 09:48:24.887000 audit[2041]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.887000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd985b92a0 a2=0 a3=7ffd985b928c items=0 ppid=2000 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:48:24.890000 audit[2044]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.890000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc47d239f0 a2=0 a3=7ffc47d239dc items=0 ppid=2000 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:48:24.892000 audit[2047]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.892000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc2b2bb190 a2=0 a3=7ffc2b2bb17c items=0 ppid=2000 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:48:24.892000 audit[2048]: NETFILTER_CFG table=nat:10 family=2 entries=1 
op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.892000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3d68dba0 a2=0 a3=7ffc3d68db8c items=0 ppid=2000 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:48:24.893000 audit[2049]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.893000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff52280970 a2=0 a3=7fff5228095c items=0 ppid=2000 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:48:24.894000 audit[2051]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.894000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe112e6670 a2=0 a3=7ffe112e665c items=0 ppid=2000 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:48:24.895000 audit[2053]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.895000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff2faf0440 a2=0 a3=7fff2faf042c items=0 ppid=2000 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:48:24.928000 audit[2056]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.928000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc9b6fa690 a2=0 a3=7ffc9b6fa67c items=0 ppid=2000 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:48:24.928000 audit[2058]: 
NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.928000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffdf6a33bf0 a2=0 a3=7ffdf6a33bdc items=0 ppid=2000 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.928000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:48:24.934000 audit[2061]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.934000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffc79b55070 a2=0 a3=7ffc79b5505c items=0 ppid=2000 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:48:24.936133 kubelet[2000]: I0209 09:48:24.936080 2000 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:48:24.935000 audit[2062]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.935000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff63048610 a2=0 a3=7fff630485fc items=0 ppid=2000 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:48:24.935000 audit[2063]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.935000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff37b4f0f0 a2=0 a3=7fff37b4f0dc items=0 ppid=2000 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:48:24.935000 audit[2064]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.935000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc582eb2f0 a2=0 a3=7ffc582eb2dc items=0 ppid=2000 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.935000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:48:24.935000 audit[2065]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.935000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefaf85370 a2=0 a3=7ffefaf8535c items=0 ppid=2000 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:48:24.936000 audit[2067]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:24.936000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8097e510 a2=0 a3=7ffd8097e4fc items=0 ppid=2000 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:48:24.937000 audit[2068]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.937000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe8b9f8e10 a2=0 a3=7ffe8b9f8dfc items=0 ppid=2000 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:48:24.937000 audit[2069]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.937000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc0c102570 a2=0 a3=7ffc0c10255c items=0 ppid=2000 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:48:24.939000 audit[2071]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.939000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffd6c3c930 a2=0 a3=7fffd6c3c91c items=0 ppid=2000 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.939000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:48:24.939000 audit[2072]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.939000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed7047740 a2=0 a3=7ffed704772c items=0 ppid=2000 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:48:24.940000 audit[2073]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.940000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2bf808b0 a2=0 a3=7fff2bf8089c items=0 ppid=2000 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:48:24.941000 audit[2075]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.941000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc05dbcb10 a2=0 a3=7ffc05dbcafc items=0 ppid=2000 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:48:24.943000 audit[2077]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.943000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd8367ccb0 a2=0 a3=7ffd8367cc9c items=0 ppid=2000 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:48:24.944000 audit[2079]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.944000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fffd9b7e2f0 a2=0 a3=7fffd9b7e2dc items=0 ppid=2000 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:48:24.946000 audit[2081]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.946000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff320262c0 a2=0 a3=7fff320262ac items=0 ppid=2000 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:48:24.948000 audit[2083]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.948000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffeb8fc8aa0 a2=0 a3=7ffeb8fc8a8c items=0 ppid=2000 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:48:24.950002 kubelet[2000]: I0209 09:48:24.949952 2000 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:48:24.950002 kubelet[2000]: I0209 09:48:24.949966 2000 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:48:24.950002 kubelet[2000]: I0209 09:48:24.949978 2000 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:48:24.950073 kubelet[2000]: E0209 09:48:24.950005 2000 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:48:24.949000 audit[2084]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.949000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbf083020 a2=0 a3=7ffcbf08300c items=0 ppid=2000 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:48:24.949000 audit[2085]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.949000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfac0bb20 a2=0 a3=7ffdfac0bb0c items=0 ppid=2000 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:48:24.951563 kubelet[2000]: W0209 09:48:24.951553 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:24.951608 kubelet[2000]: E0209 09:48:24.951569 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:24.950000 audit[2086]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:24.950000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe22cdc9e0 a2=0 a3=7ffe22cdc9cc items=0 ppid=2000 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:24.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:48:24.991814 kubelet[2000]: E0209 09:48:24.991724 2000 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:25.088252 kubelet[2000]: I0209 09:48:25.088159 2000 kubelet_node_status.go:70] 
"Attempting to register node" node="10.67.80.11" Feb 9 09:48:25.090775 kubelet[2000]: E0209 09:48:25.090704 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:25.090775 kubelet[2000]: E0209 09:48:25.090659 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 88053663, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:25.182935 kubelet[2000]: E0209 09:48:25.182612 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 88073902, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:48:25.378899 kubelet[2000]: E0209 09:48:25.378793 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:25.382783 kubelet[2000]: E0209 09:48:25.382566 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 88084378, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:25.586027 kubelet[2000]: W0209 09:48:25.585820 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:25.586027 kubelet[2000]: E0209 09:48:25.585888 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:25.680079 kubelet[2000]: W0209 09:48:25.680010 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:25.680079 kubelet[2000]: E0209 09:48:25.680088 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:25.795145 kubelet[2000]: E0209 09:48:25.795045 2000 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:25.892715 kubelet[2000]: I0209 09:48:25.892501 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:25.894736 kubelet[2000]: E0209 09:48:25.894652 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create 
resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:25.894920 kubelet[2000]: E0209 09:48:25.894648 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 892400389, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:25.896423 kubelet[2000]: W0209 09:48:25.896383 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:25.896738 kubelet[2000]: E0209 09:48:25.896440 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:25.896861 kubelet[2000]: E0209 09:48:25.896686 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 892417570, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" 
cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:25.981470 kubelet[2000]: E0209 09:48:25.981328 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 25, 892424435, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:26.379511 kubelet[2000]: E0209 09:48:26.379279 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:26.465638 kubelet[2000]: W0209 09:48:26.465533 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:26.465638 kubelet[2000]: E0209 09:48:26.465604 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:27.380176 kubelet[2000]: E0209 09:48:27.380071 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:27.397826 kubelet[2000]: E0209 09:48:27.397726 2000 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:27.406747 kubelet[2000]: W0209 09:48:27.406654 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:27.406747 kubelet[2000]: E0209 09:48:27.406717 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:27.496426 kubelet[2000]: I0209 
09:48:27.496369 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:27.498953 kubelet[2000]: E0209 09:48:27.498870 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:27.499175 kubelet[2000]: E0209 09:48:27.498849 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 27, 496265206, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:27.501138 kubelet[2000]: E0209 09:48:27.500957 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 27, 496286875, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:48:27.503685 kubelet[2000]: E0209 09:48:27.503472 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 27, 496297592, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:27.903921 kubelet[2000]: W0209 09:48:27.903836 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:27.904259 kubelet[2000]: E0209 09:48:27.903942 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:27.980047 kubelet[2000]: W0209 09:48:27.979943 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:27.980047 kubelet[2000]: E0209 09:48:27.980014 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:28.381548 kubelet[2000]: E0209 09:48:28.381306 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:29.381908 kubelet[2000]: E0209 09:48:29.381803 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:29.522526 kubelet[2000]: W0209 09:48:29.522394 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:29.522526 kubelet[2000]: E0209 09:48:29.522462 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:30.382893 kubelet[2000]: E0209 09:48:30.382784 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:30.600345 kubelet[2000]: E0209 09:48:30.600244 2000 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.67.80.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:48:30.701057 kubelet[2000]: I0209 09:48:30.700888 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:30.702523 kubelet[2000]: E0209 09:48:30.702309 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a22b8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.67.80.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450042765, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 30, 700801688, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a22b8d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
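The lease controller's retry interval doubles on each failure in this stretch (800ms, 1.6s, 3.2s, now 6.4s), the usual exponential backoff while the node still lacks credentials to touch kube-node-lease. A small sketch, assuming the journal has been saved to a plain-text file, that pulls those intervals out for a quick look:

    import re

    backoff = re.compile(r'failed to ensure lease exists, will retry in ([0-9.]+m?s)')

    def retry_intervals(journal_text: str) -> list[str]:
        # e.g. ['800ms', '1.6s', '3.2s', '6.4s'] for the excerpt above
        return backoff.findall(journal_text)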
Feb 9 09:48:30.702523 kubelet[2000]: E0209 09:48:30.702448 2000 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.67.80.11" Feb 9 09:48:30.704569 kubelet[2000]: E0209 09:48:30.704385 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23781", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.67.80.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450045825, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 30, 700820709, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23781" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:48:30.706561 kubelet[2000]: E0209 09:48:30.706372 2000 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11.17b228d519a23ee9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.67.80.11", UID:"10.67.80.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.67.80.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.67.80.11"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 48, 24, 450047721, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 48, 30, 700828941, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.67.80.11.17b228d519a23ee9" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
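Note how the rejected events keep the same object name and FirstTimestamp while Count climbs 4, 5, 6, 7 and only LastTimestamp advances: the kubelet aggregates repeats of the same condition (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID) into one Event and patches it, and it is that patch the API server refuses for the anonymous user. A rough way to watch the counter from the journal text (the regex is approximate, tuned to the struct dumps above):

    import re

    event = re.compile(r'Reason:"([^"]+)".*?Count:(\d+)', re.S)

    def event_counts(journal_text: str):
        # -> [('NodeHasSufficientMemory', 4), ('NodeHasNoDiskPressure', 4), ...]
        return [(reason, int(count)) for reason, count in event.findall(journal_text)]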
Feb 9 09:48:31.384042 kubelet[2000]: E0209 09:48:31.383972 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:31.681116 kubelet[2000]: W0209 09:48:31.680904 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:31.681116 kubelet[2000]: E0209 09:48:31.680975 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:48:32.011989 kubelet[2000]: W0209 09:48:32.011773 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:32.011989 kubelet[2000]: E0209 09:48:32.011848 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.80.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:48:32.253431 kubelet[2000]: W0209 09:48:32.253328 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:32.253431 kubelet[2000]: E0209 09:48:32.253400 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:48:32.384860 kubelet[2000]: E0209 09:48:32.384649 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:33.181746 kubelet[2000]: W0209 09:48:33.181643 2000 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:33.181746 kubelet[2000]: E0209 09:48:33.181707 2000 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:48:33.385379 kubelet[2000]: E0209 09:48:33.385276 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:34.358508 kubelet[2000]: I0209 09:48:34.358369 2000 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 09:48:34.385534 kubelet[2000]: E0209 09:48:34.385447 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:34.644510 kubelet[2000]: E0209 09:48:34.644247 2000 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: 
node \"10.67.80.11\" not found" Feb 9 09:48:34.753007 kubelet[2000]: E0209 09:48:34.752905 2000 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.11" not found Feb 9 09:48:35.386433 kubelet[2000]: E0209 09:48:35.386333 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:35.796518 kubelet[2000]: E0209 09:48:35.796290 2000 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.67.80.11" not found Feb 9 09:48:36.387161 kubelet[2000]: E0209 09:48:36.387103 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:37.010721 kubelet[2000]: E0209 09:48:37.010621 2000 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.11\" not found" node="10.67.80.11" Feb 9 09:48:37.103860 kubelet[2000]: I0209 09:48:37.103773 2000 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 9 09:48:37.196272 kubelet[2000]: I0209 09:48:37.196170 2000 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.11" Feb 9 09:48:37.207093 kubelet[2000]: E0209 09:48:37.207015 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.307572 kubelet[2000]: E0209 09:48:37.307333 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.388440 kubelet[2000]: E0209 09:48:37.388353 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:37.408474 kubelet[2000]: E0209 09:48:37.408367 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.508658 kubelet[2000]: E0209 09:48:37.508550 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.609544 kubelet[2000]: E0209 09:48:37.609309 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.660859 sudo[1736]: pam_unix(sudo:session): session closed for user root Feb 9 09:48:37.660000 audit[1736]: USER_END pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:37.663748 sshd[1730]: pam_unix(sshd:session): session closed for user core Feb 9 09:48:37.667390 systemd[1]: sshd@6-139.178.94.23:22-147.75.109.163:53424.service: Deactivated successfully. Feb 9 09:48:37.668235 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:48:37.668274 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:48:37.668827 systemd-logind[1538]: Removed session 9. Feb 9 09:48:37.687746 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 09:48:37.687785 kernel: audit: type=1106 audit(1707472117.660:192): pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 09:48:37.709980 kubelet[2000]: E0209 09:48:37.709939 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.660000 audit[1736]: CRED_DISP pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:37.778560 kernel: audit: type=1104 audit(1707472117.660:193): pid=1736 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:48:37.810277 kubelet[2000]: E0209 09:48:37.810240 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.665000 audit[1730]: USER_END pid=1730 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:37.911054 kubelet[2000]: E0209 09:48:37.911019 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:37.959897 kernel: audit: type=1106 audit(1707472117.665:194): pid=1730 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:37.959925 kernel: audit: type=1104 audit(1707472117.665:195): pid=1730 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:37.665000 audit[1730]: CRED_DISP pid=1730 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 09:48:38.012011 kubelet[2000]: E0209 09:48:38.011975 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.048132 kernel: audit: type=1131 audit(1707472117.667:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.94.23:22-147.75.109.163:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:48:37.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.94.23:22-147.75.109.163:53424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:48:38.112823 kubelet[2000]: E0209 09:48:38.112787 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.214048 kubelet[2000]: E0209 09:48:38.213830 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.315108 kubelet[2000]: E0209 09:48:38.314999 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.389006 kubelet[2000]: E0209 09:48:38.388898 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:38.415463 kubelet[2000]: E0209 09:48:38.415356 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.516461 kubelet[2000]: E0209 09:48:38.516242 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.617092 kubelet[2000]: E0209 09:48:38.616984 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.717856 kubelet[2000]: E0209 09:48:38.717749 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.818967 kubelet[2000]: E0209 09:48:38.818748 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:38.919130 kubelet[2000]: E0209 09:48:38.919023 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.020401 kubelet[2000]: E0209 09:48:39.020282 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.121626 kubelet[2000]: E0209 09:48:39.121406 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.222264 kubelet[2000]: E0209 09:48:39.222158 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.323523 kubelet[2000]: E0209 09:48:39.323401 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.389459 kubelet[2000]: E0209 09:48:39.389241 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:39.424728 kubelet[2000]: E0209 09:48:39.424621 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.525894 kubelet[2000]: E0209 09:48:39.525777 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.626183 kubelet[2000]: E0209 09:48:39.626078 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.727546 kubelet[2000]: E0209 09:48:39.727264 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.828466 kubelet[2000]: E0209 09:48:39.828357 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:39.929319 kubelet[2000]: E0209 
09:48:39.929210 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.030276 kubelet[2000]: E0209 09:48:40.030052 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.130366 kubelet[2000]: E0209 09:48:40.130249 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.231474 kubelet[2000]: E0209 09:48:40.231366 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.332448 kubelet[2000]: E0209 09:48:40.332231 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.389693 kubelet[2000]: E0209 09:48:40.389590 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:40.432503 kubelet[2000]: E0209 09:48:40.432358 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.533445 kubelet[2000]: E0209 09:48:40.533339 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.633704 kubelet[2000]: E0209 09:48:40.633512 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.733871 kubelet[2000]: E0209 09:48:40.733799 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.834055 kubelet[2000]: E0209 09:48:40.833957 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:40.934672 kubelet[2000]: E0209 09:48:40.934495 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:41.035350 kubelet[2000]: E0209 09:48:41.035226 2000 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.11\" not found" Feb 9 09:48:41.137418 kubelet[2000]: I0209 09:48:41.137321 2000 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 09:48:41.138137 env[1550]: time="2024-02-09T09:48:41.138012077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
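From 09:48:37 the node is registered, but until the informers catch up the kubelet keeps logging "Error getting the current node from lister ... not found" roughly every 100ms. When triaging a dump like this it helps to collapse the repetition into message counts; a minimal sketch whose regex mirrors the klog lines visible above (severity, timestamp, pid, file:line, quoted message — unquoted messages are simply skipped):

    import re
    from collections import Counter

    klog = re.compile(r'kubelet\[\d+\]: ([EWI])\d{4} [\d:.]+\s+\d+ (\S+)\] "([^"]+)"')

    def summarize(journal_text: str) -> Counter:
        # keys like ('E', 'kubelet_node_status.go:458', 'Error getting the current node from lister')
        return Counter(klog.findall(journal_text))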
Feb 9 09:48:41.138989 kubelet[2000]: I0209 09:48:41.138463 2000 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 09:48:41.388635 kubelet[2000]: I0209 09:48:41.388394 2000 apiserver.go:52] "Watching apiserver" Feb 9 09:48:41.390923 kubelet[2000]: E0209 09:48:41.390815 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:41.394007 kubelet[2000]: I0209 09:48:41.393910 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:41.394192 kubelet[2000]: I0209 09:48:41.394088 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:41.394334 kubelet[2000]: I0209 09:48:41.394198 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:41.394629 kubelet[2000]: E0209 09:48:41.394538 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:41.482716 kubelet[2000]: I0209 09:48:41.482618 2000 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:48:41.499283 kubelet[2000]: I0209 09:48:41.499174 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20cd0fa7-3de4-493f-b032-37a3b24fce24-kube-proxy\") pod \"kube-proxy-hhv6c\" (UID: \"20cd0fa7-3de4-493f-b032-37a3b24fce24\") " pod="kube-system/kube-proxy-hhv6c" Feb 9 09:48:41.499283 kubelet[2000]: I0209 09:48:41.499285 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-lib-modules\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.499746 kubelet[2000]: I0209 09:48:41.499475 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-policysync\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.499746 kubelet[2000]: I0209 09:48:41.499655 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ffb674a8-89e1-469a-9a8e-76ca472e529c-node-certs\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.499962 kubelet[2000]: I0209 09:48:41.499813 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mld28\" (UniqueName: \"kubernetes.io/projected/ffb674a8-89e1-469a-9a8e-76ca472e529c-kube-api-access-mld28\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.499962 kubelet[2000]: I0209 09:48:41.499888 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8-registration-dir\") pod \"csi-node-driver-s5z52\" (UID: 
\"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\") " pod="calico-system/csi-node-driver-s5z52" Feb 9 09:48:41.500181 kubelet[2000]: I0209 09:48:41.500021 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20cd0fa7-3de4-493f-b032-37a3b24fce24-lib-modules\") pod \"kube-proxy-hhv6c\" (UID: \"20cd0fa7-3de4-493f-b032-37a3b24fce24\") " pod="kube-system/kube-proxy-hhv6c" Feb 9 09:48:41.500181 kubelet[2000]: I0209 09:48:41.500094 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-xtables-lock\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.500181 kubelet[2000]: I0209 09:48:41.500155 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-var-run-calico\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.500476 kubelet[2000]: I0209 09:48:41.500271 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-cni-net-dir\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.500476 kubelet[2000]: I0209 09:48:41.500404 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8-socket-dir\") pod \"csi-node-driver-s5z52\" (UID: \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\") " pod="calico-system/csi-node-driver-s5z52" Feb 9 09:48:41.500728 kubelet[2000]: I0209 09:48:41.500539 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdrv\" (UniqueName: \"kubernetes.io/projected/94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8-kube-api-access-gmdrv\") pod \"csi-node-driver-s5z52\" (UID: \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\") " pod="calico-system/csi-node-driver-s5z52" Feb 9 09:48:41.500728 kubelet[2000]: I0209 09:48:41.500632 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20cd0fa7-3de4-493f-b032-37a3b24fce24-xtables-lock\") pod \"kube-proxy-hhv6c\" (UID: \"20cd0fa7-3de4-493f-b032-37a3b24fce24\") " pod="kube-system/kube-proxy-hhv6c" Feb 9 09:48:41.500728 kubelet[2000]: I0209 09:48:41.500708 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-var-lib-calico\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.501027 kubelet[2000]: I0209 09:48:41.500860 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8-kubelet-dir\") pod \"csi-node-driver-s5z52\" (UID: \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\") " 
pod="calico-system/csi-node-driver-s5z52" Feb 9 09:48:41.501027 kubelet[2000]: I0209 09:48:41.500960 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47795\" (UniqueName: \"kubernetes.io/projected/20cd0fa7-3de4-493f-b032-37a3b24fce24-kube-api-access-47795\") pod \"kube-proxy-hhv6c\" (UID: \"20cd0fa7-3de4-493f-b032-37a3b24fce24\") " pod="kube-system/kube-proxy-hhv6c" Feb 9 09:48:41.501243 kubelet[2000]: I0209 09:48:41.501073 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb674a8-89e1-469a-9a8e-76ca472e529c-tigera-ca-bundle\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.501243 kubelet[2000]: I0209 09:48:41.501163 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-cni-bin-dir\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.501456 kubelet[2000]: I0209 09:48:41.501256 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-cni-log-dir\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.501456 kubelet[2000]: I0209 09:48:41.501364 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ffb674a8-89e1-469a-9a8e-76ca472e529c-flexvol-driver-host\") pod \"calico-node-xdqjg\" (UID: \"ffb674a8-89e1-469a-9a8e-76ca472e529c\") " pod="calico-system/calico-node-xdqjg" Feb 9 09:48:41.501690 kubelet[2000]: I0209 09:48:41.501466 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8-varrun\") pod \"csi-node-driver-s5z52\" (UID: \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\") " pod="calico-system/csi-node-driver-s5z52" Feb 9 09:48:41.501690 kubelet[2000]: I0209 09:48:41.501635 2000 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:48:41.605285 kubelet[2000]: E0209 09:48:41.605192 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.605285 kubelet[2000]: W0209 09:48:41.605235 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.605285 kubelet[2000]: E0209 09:48:41.605284 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:41.606122 kubelet[2000]: E0209 09:48:41.606050 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.606122 kubelet[2000]: W0209 09:48:41.606082 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.606122 kubelet[2000]: E0209 09:48:41.606123 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.610186 kubelet[2000]: E0209 09:48:41.610148 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.610186 kubelet[2000]: W0209 09:48:41.610154 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.610186 kubelet[2000]: E0209 09:48:41.610161 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.706109 kubelet[2000]: E0209 09:48:41.705904 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.706109 kubelet[2000]: W0209 09:48:41.705945 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.706109 kubelet[2000]: E0209 09:48:41.705991 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.706698 kubelet[2000]: E0209 09:48:41.706615 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.706698 kubelet[2000]: W0209 09:48:41.706648 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.706698 kubelet[2000]: E0209 09:48:41.706686 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.707285 kubelet[2000]: E0209 09:48:41.707226 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.707285 kubelet[2000]: W0209 09:48:41.707260 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.707285 kubelet[2000]: E0209 09:48:41.707297 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:41.809065 kubelet[2000]: E0209 09:48:41.808975 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.809065 kubelet[2000]: W0209 09:48:41.809014 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.809065 kubelet[2000]: E0209 09:48:41.809058 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.809681 kubelet[2000]: E0209 09:48:41.809603 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.809681 kubelet[2000]: W0209 09:48:41.809632 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.809681 kubelet[2000]: E0209 09:48:41.809666 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.810263 kubelet[2000]: E0209 09:48:41.810175 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.810263 kubelet[2000]: W0209 09:48:41.810213 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.810263 kubelet[2000]: E0209 09:48:41.810271 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.811250 kubelet[2000]: E0209 09:48:41.811215 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.811250 kubelet[2000]: W0209 09:48:41.811221 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.811250 kubelet[2000]: E0209 09:48:41.811228 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:41.912363 kubelet[2000]: E0209 09:48:41.912268 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.912363 kubelet[2000]: W0209 09:48:41.912308 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.912363 kubelet[2000]: E0209 09:48:41.912353 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:41.913034 kubelet[2000]: E0209 09:48:41.912956 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:41.913034 kubelet[2000]: W0209 09:48:41.912990 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:41.913034 kubelet[2000]: E0209 09:48:41.913029 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:42.001910 env[1550]: time="2024-02-09T09:48:42.001675902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xdqjg,Uid:ffb674a8-89e1-469a-9a8e-76ca472e529c,Namespace:calico-system,Attempt:0,}" Feb 9 09:48:42.010908 kubelet[2000]: E0209 09:48:42.010870 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:42.010908 kubelet[2000]: W0209 09:48:42.010879 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:42.010908 kubelet[2000]: E0209 09:48:42.010890 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:42.014197 kubelet[2000]: E0209 09:48:42.014155 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:42.014197 kubelet[2000]: W0209 09:48:42.014161 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:42.014197 kubelet[2000]: E0209 09:48:42.014169 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:42.116081 kubelet[2000]: E0209 09:48:42.115977 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:42.116081 kubelet[2000]: W0209 09:48:42.116019 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:42.116081 kubelet[2000]: E0209 09:48:42.116066 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:48:42.203698 kubelet[2000]: E0209 09:48:42.203683 2000 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:48:42.203698 kubelet[2000]: W0209 09:48:42.203694 2000 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:48:42.203807 kubelet[2000]: E0209 09:48:42.203707 2000 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:48:42.300919 env[1550]: time="2024-02-09T09:48:42.300761338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhv6c,Uid:20cd0fa7-3de4-493f-b032-37a3b24fce24,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:42.391861 kubelet[2000]: E0209 09:48:42.391753 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:42.896746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473252069.mount: Deactivated successfully. Feb 9 09:48:42.897953 env[1550]: time="2024-02-09T09:48:42.897933231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.899284 env[1550]: time="2024-02-09T09:48:42.899270933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.899746 env[1550]: time="2024-02-09T09:48:42.899717285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.900670 env[1550]: time="2024-02-09T09:48:42.900659709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.901875 env[1550]: time="2024-02-09T09:48:42.901866215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.902190 env[1550]: time="2024-02-09T09:48:42.902182456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.902622 env[1550]: time="2024-02-09T09:48:42.902582377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.903632 env[1550]: time="2024-02-09T09:48:42.903621563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:42.909870 env[1550]: time="2024-02-09T09:48:42.909837968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:42.909870 env[1550]: time="2024-02-09T09:48:42.909861341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:42.909870 env[1550]: time="2024-02-09T09:48:42.909868322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:42.909991 env[1550]: time="2024-02-09T09:48:42.909947464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56 pid=2127 runtime=io.containerd.runc.v2 Feb 9 09:48:42.910616 env[1550]: time="2024-02-09T09:48:42.910594760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:42.910616 env[1550]: time="2024-02-09T09:48:42.910611064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:42.910675 env[1550]: time="2024-02-09T09:48:42.910617557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:42.910694 env[1550]: time="2024-02-09T09:48:42.910672057Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2929806b4fe37798de420db9214bbfacdabb65333033aa461f4f05ea337cc261 pid=2136 runtime=io.containerd.runc.v2 Feb 9 09:48:42.936958 env[1550]: time="2024-02-09T09:48:42.936933636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xdqjg,Uid:ffb674a8-89e1-469a-9a8e-76ca472e529c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\"" Feb 9 09:48:42.937849 env[1550]: time="2024-02-09T09:48:42.937834389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 09:48:42.938097 env[1550]: time="2024-02-09T09:48:42.938081185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhv6c,Uid:20cd0fa7-3de4-493f-b032-37a3b24fce24,Namespace:kube-system,Attempt:0,} returns sandbox id \"2929806b4fe37798de420db9214bbfacdabb65333033aa461f4f05ea337cc261\"" Feb 9 09:48:42.950702 kubelet[2000]: E0209 09:48:42.950663 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:43.392839 kubelet[2000]: E0209 09:48:43.392737 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:44.379514 kubelet[2000]: E0209 09:48:44.379388 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:44.393704 kubelet[2000]: E0209 09:48:44.393640 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:44.951178 kubelet[2000]: E0209 09:48:44.951124 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:45.394668 kubelet[2000]: E0209 09:48:45.394562 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:46.395271 kubelet[2000]: E0209 09:48:46.395162 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:46.650180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266703781.mount: Deactivated successfully. Feb 9 09:48:46.951770 kubelet[2000]: E0209 09:48:46.951553 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:47.395512 kubelet[2000]: E0209 09:48:47.395373 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:48.396663 kubelet[2000]: E0209 09:48:48.396552 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:48.951238 kubelet[2000]: E0209 09:48:48.951129 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:49.397339 kubelet[2000]: E0209 09:48:49.397279 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:50.398118 kubelet[2000]: E0209 09:48:50.397976 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:50.950823 kubelet[2000]: E0209 09:48:50.950725 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:51.398297 kubelet[2000]: E0209 09:48:51.398154 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:52.349340 env[1550]: time="2024-02-09T09:48:52.349319715Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:52.350435 env[1550]: time="2024-02-09T09:48:52.350411165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:52.351379 env[1550]: time="2024-02-09T09:48:52.351367831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:52.352697 env[1550]: time="2024-02-09T09:48:52.352684925Z" level=info 
msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:52.353149 env[1550]: time="2024-02-09T09:48:52.353137783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 09:48:52.353683 env[1550]: time="2024-02-09T09:48:52.353669188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:48:52.354555 env[1550]: time="2024-02-09T09:48:52.354505268Z" level=info msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 09:48:52.359631 env[1550]: time="2024-02-09T09:48:52.359586296Z" level=info msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab\"" Feb 9 09:48:52.359859 env[1550]: time="2024-02-09T09:48:52.359844971Z" level=info msg="StartContainer for \"949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab\"" Feb 9 09:48:52.399065 kubelet[2000]: E0209 09:48:52.399022 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:52.415894 env[1550]: time="2024-02-09T09:48:52.415845986Z" level=info msg="StartContainer for \"949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab\" returns successfully" Feb 9 09:48:52.477635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab-rootfs.mount: Deactivated successfully. Feb 9 09:48:52.551184 env[1550]: time="2024-02-09T09:48:52.551036041Z" level=info msg="shim disconnected" id=949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab Feb 9 09:48:52.551184 env[1550]: time="2024-02-09T09:48:52.551139044Z" level=warning msg="cleaning up after shim disconnected" id=949ac08373ac545ae147e4afbce2a516dd04d417731cb81f5892a7a786075fab namespace=k8s.io Feb 9 09:48:52.551184 env[1550]: time="2024-02-09T09:48:52.551169875Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.570564 env[1550]: time="2024-02-09T09:48:52.570511236Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2242 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.951055 kubelet[2000]: E0209 09:48:52.950943 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:53.399744 kubelet[2000]: E0209 09:48:53.399636 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:53.859762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483246810.mount: Deactivated successfully. 
Feb 9 09:48:54.148394 env[1550]: time="2024-02-09T09:48:54.148343157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:54.148988 env[1550]: time="2024-02-09T09:48:54.148932737Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:54.149626 env[1550]: time="2024-02-09T09:48:54.149613677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:54.150630 env[1550]: time="2024-02-09T09:48:54.150618050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:54.150890 env[1550]: time="2024-02-09T09:48:54.150860367Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 09:48:54.151306 env[1550]: time="2024-02-09T09:48:54.151246195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 09:48:54.152025 env[1550]: time="2024-02-09T09:48:54.151976626Z" level=info msg="CreateContainer within sandbox \"2929806b4fe37798de420db9214bbfacdabb65333033aa461f4f05ea337cc261\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:48:54.157582 env[1550]: time="2024-02-09T09:48:54.157520197Z" level=info msg="CreateContainer within sandbox \"2929806b4fe37798de420db9214bbfacdabb65333033aa461f4f05ea337cc261\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b41e20bc6aceb27a22422a39cdfb358bd2e20b6ac794d70c8e9982c8d020b15\"" Feb 9 09:48:54.157836 env[1550]: time="2024-02-09T09:48:54.157797604Z" level=info msg="StartContainer for \"7b41e20bc6aceb27a22422a39cdfb358bd2e20b6ac794d70c8e9982c8d020b15\"" Feb 9 09:48:54.218962 env[1550]: time="2024-02-09T09:48:54.218902079Z" level=info msg="StartContainer for \"7b41e20bc6aceb27a22422a39cdfb358bd2e20b6ac794d70c8e9982c8d020b15\" returns successfully" Feb 9 09:48:54.231000 audit[2326]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.231000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff465a81a0 a2=0 a3=7fff465a818c items=0 ppid=2277 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.386244 kernel: audit: type=1325 audit(1707472134.231:197): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.386290 kernel: audit: type=1300 audit(1707472134.231:197): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff465a81a0 a2=0 a3=7fff465a818c items=0 ppid=2277 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.386307 kernel: audit: type=1327 
audit(1707472134.231:197): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:48:54.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:48:54.400354 kubelet[2000]: E0209 09:48:54.400293 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:54.444108 kernel: audit: type=1325 audit(1707472134.231:198): table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.231000 audit[2327]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.502098 kernel: audit: type=1300 audit(1707472134.231:198): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd82c4f260 a2=0 a3=7ffd82c4f24c items=0 ppid=2277 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.231000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd82c4f260 a2=0 a3=7ffd82c4f24c items=0 ppid=2277 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.598811 kernel: audit: type=1327 audit(1707472134.231:198): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:48:54.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:48:54.232000 audit[2328]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.715354 kernel: audit: type=1325 audit(1707472134.232:199): table=nat:37 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.715386 kernel: audit: type=1300 audit(1707472134.232:199): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd54f19090 a2=0 a3=7ffd54f1907c items=0 ppid=2277 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.232000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd54f19090 a2=0 a3=7ffd54f1907c items=0 ppid=2277 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.811925 kernel: audit: type=1327 audit(1707472134.232:199): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:48:54.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:48:54.870147 kernel: audit: type=1325 audit(1707472134.232:200): table=nat:38 family=10 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
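[Annotation] The audit PROCTITLE records in this stretch carry the invoking command line as hex with NUL-separated arguments. The value repeated just above decodes to "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. kube-proxy creating its canary chain in the mangle table. A small decoder, shown for illustration only and not part of the logged tooling:

def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex blob into the command line it encodes."""
    raw = bytes.fromhex(hex_value)
    # Arguments are NUL-separated, exactly as in /proc/<pid>/cmdline
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle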
Feb 9 09:48:54.232000 audit[2329]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.232000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0209b880 a2=0 a3=7ffd0209b86c items=0 ppid=2277 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:48:54.232000 audit[2330]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.232000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccafeeb70 a2=0 a3=7ffccafeeb5c items=0 ppid=2277 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:48:54.232000 audit[2331]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.232000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce577aca0 a2=0 a3=7ffce577ac8c items=0 ppid=2277 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:48:54.333000 audit[2334]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.333000 audit[2334]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff0acae20 a2=0 a3=7ffff0acae0c items=0 ppid=2277 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:48:54.335000 audit[2336]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.335000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde7ad3d00 a2=0 a3=7ffde7ad3cec items=0 ppid=2277 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 
09:48:54.337000 audit[2339]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.337000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffc9713c70 a2=0 a3=7fffc9713c5c items=0 ppid=2277 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 09:48:54.337000 audit[2340]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.337000 audit[2340]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe226c4c80 a2=0 a3=7ffe226c4c6c items=0 ppid=2277 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:48:54.338000 audit[2342]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.338000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2832a9e0 a2=0 a3=7ffe2832a9cc items=0 ppid=2277 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:48:54.339000 audit[2343]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.339000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd18efac60 a2=0 a3=7ffd18efac4c items=0 ppid=2277 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:48:54.340000 audit[2345]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.340000 audit[2345]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe640c0370 a2=0 a3=7ffe640c035c items=0 ppid=2277 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.340000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:48:54.342000 audit[2348]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.342000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1bbb45d0 a2=0 a3=7ffd1bbb45bc items=0 ppid=2277 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 09:48:54.343000 audit[2349]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.343000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbdcc0910 a2=0 a3=7ffdbdcc08fc items=0 ppid=2277 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:48:54.344000 audit[2351]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.344000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9561b960 a2=0 a3=7fff9561b94c items=0 ppid=2277 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:48:54.345000 audit[2352]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.345000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc69e0bce0 a2=0 a3=7ffc69e0bccc items=0 ppid=2277 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:48:54.445000 audit[2354]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.445000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc471ac200 a2=0 a3=7ffc471ac1ec items=0 ppid=2277 pid=2354 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:48:54.930000 audit[2357]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.930000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6af93130 a2=0 a3=7ffc6af9311c items=0 ppid=2277 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:48:54.931000 audit[2360]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.931000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1f939230 a2=0 a3=7fff1f93921c items=0 ppid=2277 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:48:54.932000 audit[2361]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.932000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc5f2acb0 a2=0 a3=7ffdc5f2ac9c items=0 ppid=2277 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:48:54.933000 audit[2363]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.933000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc05a95120 a2=0 a3=7ffc05a9510c items=0 ppid=2277 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.933000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:48:54.935000 audit[2366]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:48:54.935000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe0a8b4e60 a2=0 a3=7ffe0a8b4e4c items=0 ppid=2277 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:48:54.941000 audit[2370]: NETFILTER_CFG table=filter:58 family=2 entries=7 op=nft_register_rule pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:54.941000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffea1ef35b0 a2=0 a3=7ffea1ef359c items=0 ppid=2277 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:54.950935 kubelet[2000]: E0209 09:48:54.950909 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:54.951000 audit[2370]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:48:54.951000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffea1ef35b0 a2=0 a3=7ffea1ef359c items=0 ppid=2277 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:54.968000 audit[2376]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.968000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd703f4310 a2=0 a3=7ffd703f42fc items=0 ppid=2277 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:48:54.975000 audit[2378]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain 
pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.975000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff13c922b0 a2=0 a3=7fff13c9229c items=0 ppid=2277 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 09:48:54.984000 audit[2381]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.984000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe718fb5b0 a2=0 a3=7ffe718fb59c items=0 ppid=2277 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 09:48:54.987000 audit[2382]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.987000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc77224b70 a2=0 a3=7ffc77224b5c items=0 ppid=2277 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:48:54.993000 audit[2384]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.993000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7dc12770 a2=0 a3=7fff7dc1275c items=0 ppid=2277 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:54.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:48:54.996000 audit[2385]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:54.996000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc5d67250 a2=0 a3=7ffcc5d6723c items=0 ppid=2277 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
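[Annotation] The NETFILTER_CFG records from pid 2326 onward show kube-proxy registering its chains and rules in the mangle, nat and filter tables, first for IPv4 (family=2) and then, in the ip6tables entries here, the IPv6 mirror (family=10). One illustrative way to summarize such records from a saved copy of this journal; the field names are exactly as they appear in the audit lines, and nothing below is part of kube-proxy itself:

import re
from collections import Counter

NETFILTER = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

def summarize(log_text: str) -> Counter:
    """Count netfilter chain/rule registrations per (table, protocol, op)."""
    counts = Counter()
    for table, family, entries, op in NETFILTER.findall(log_text):
        proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
        counts[(table, proto, op)] += int(entries)
    return counts

# Example usage (hypothetical file name): summarize(open("journal.txt").read())
# would show the filter/nat registrations for ipv4 and their ipv6 mirror growing in step.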
Feb 9 09:48:54.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:48:55.002000 audit[2387]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.002000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb0d14460 a2=0 a3=7fffb0d1444c items=0 ppid=2277 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 09:48:55.011000 audit[2390]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.011000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdb60a19c0 a2=0 a3=7ffdb60a19ac items=0 ppid=2277 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:48:55.014000 audit[2391]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.014000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff957137a0 a2=0 a3=7fff9571378c items=0 ppid=2277 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:48:55.020000 audit[2393]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.020000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc8e741f0 a2=0 a3=7ffdc8e741dc items=0 ppid=2277 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:48:55.023000 audit[2394]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.023000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4b390930 a2=0 
a3=7ffd4b39091c items=0 ppid=2277 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:48:55.030000 audit[2396]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.030000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc263d63a0 a2=0 a3=7ffc263d638c items=0 ppid=2277 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:48:55.039000 audit[2399]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.039000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe241511c0 a2=0 a3=7ffe241511ac items=0 ppid=2277 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:48:55.048000 audit[2402]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.048000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff993d5850 a2=0 a3=7fff993d583c items=0 ppid=2277 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.048000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 09:48:55.050000 audit[2403]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.050000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3f5c51b0 a2=0 a3=7ffd3f5c519c items=0 ppid=2277 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 
09:48:55.056000 audit[2405]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.056000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff5e896a40 a2=0 a3=7fff5e896a2c items=0 ppid=2277 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.056000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:48:55.064000 audit[2408]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:48:55.064000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff34d34bc0 a2=0 a3=7fff34d34bac items=0 ppid=2277 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:48:55.078000 audit[2412]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:48:55.078000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcf5dad6d0 a2=0 a3=7ffcf5dad6bc items=0 ppid=2277 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.078000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:55.079000 audit[2412]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:48:55.079000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffcf5dad6d0 a2=0 a3=7ffcf5dad6bc items=0 ppid=2277 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:48:55.079000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:48:55.400991 kubelet[2000]: E0209 09:48:55.400901 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:55.831029 update_engine[1540]: I0209 09:48:55.830909 1540 update_attempter.cc:509] Updating boot flags... 
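[Annotation] From here to the end of the excerpt two kubelet messages recur on a timer: "Unable to read config path" for /etc/kubernetes/manifests (the static-pod directory is simply not present on this node, so file_linux.go ignores it) and "Error syncing pod, skipping" for csi-node-driver-s5z52, which keeps repeating while the CNI plugin is still uninitialized; the calico install-cni container created at the end of the excerpt is what delivers the CNI config. When reading a journal like this, collapsing such repeats makes the one-off events (the update_engine boot-flag update, the containerd mounts) stand out. A rough helper, assuming journalctl-style lines like the ones here; the file name and exact prefix handling are illustrative assumptions:

import re
from collections import Counter

# Strip the klog header (severity+date, time, pid, file:line]) so repeated messages collapse.
KLOG = re.compile(r"^[A-Z]\d{4} [\d:.]+\s+\d+\s+\S+\]\s*")

def collapse(lines):
    """Count identical kubelet/containerd messages after removing per-entry noise."""
    counts = Counter()
    for line in lines:
        msg = line.split("]: ", 1)[-1]          # drop the "unit[pid]: " prefix if present
        counts[KLOG.sub("", msg).strip()] += 1
    return counts.most_common()

# e.g. collapse(open("journal.txt")) would rank the two recurring warnings above
# far ahead of the single update_attempter.cc "Updating boot flags" entry.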
Feb 9 09:48:56.401803 kubelet[2000]: E0209 09:48:56.401696 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:56.950915 kubelet[2000]: E0209 09:48:56.950803 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:57.402352 kubelet[2000]: E0209 09:48:57.402203 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:57.779383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415357725.mount: Deactivated successfully. Feb 9 09:48:58.402745 kubelet[2000]: E0209 09:48:58.402636 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:58.951072 kubelet[2000]: E0209 09:48:58.950972 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:48:59.402944 kubelet[2000]: E0209 09:48:59.402841 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:00.403880 kubelet[2000]: E0209 09:49:00.403760 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:00.950694 kubelet[2000]: E0209 09:49:00.950589 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:01.405072 kubelet[2000]: E0209 09:49:01.404956 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:02.405871 kubelet[2000]: E0209 09:49:02.405778 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:02.951449 kubelet[2000]: E0209 09:49:02.951351 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:03.406776 kubelet[2000]: E0209 09:49:03.406664 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:04.379010 kubelet[2000]: E0209 09:49:04.378894 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:04.407669 kubelet[2000]: E0209 09:49:04.407577 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:04.951155 kubelet[2000]: E0209 09:49:04.951056 2000 pod_workers.go:965] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:05.408735 kubelet[2000]: E0209 09:49:05.408636 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:06.409049 kubelet[2000]: E0209 09:49:06.408945 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:06.950648 kubelet[2000]: E0209 09:49:06.950553 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:07.410189 kubelet[2000]: E0209 09:49:07.410065 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:08.411380 kubelet[2000]: E0209 09:49:08.411284 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:08.951108 kubelet[2000]: E0209 09:49:08.951088 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:09.411994 kubelet[2000]: E0209 09:49:09.411978 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:10.162954 env[1550]: time="2024-02-09T09:49:10.162910502Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:10.163431 env[1550]: time="2024-02-09T09:49:10.163392834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:10.164393 env[1550]: time="2024-02-09T09:49:10.164355234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:10.165280 env[1550]: time="2024-02-09T09:49:10.165240220Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:10.165688 env[1550]: time="2024-02-09T09:49:10.165672851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 09:49:10.166894 env[1550]: time="2024-02-09T09:49:10.166881817Z" level=info msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:49:10.171459 env[1550]: time="2024-02-09T09:49:10.171443257Z" level=info 
msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208\"" Feb 9 09:49:10.171723 env[1550]: time="2024-02-09T09:49:10.171691344Z" level=info msg="StartContainer for \"1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208\"" Feb 9 09:49:10.208334 env[1550]: time="2024-02-09T09:49:10.208305092Z" level=info msg="StartContainer for \"1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208\" returns successfully" Feb 9 09:49:10.412277 kubelet[2000]: E0209 09:49:10.412186 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:10.950609 kubelet[2000]: E0209 09:49:10.950591 2000 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:11.016582 kubelet[2000]: I0209 09:49:11.016505 2000 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:49:11.034287 kubelet[2000]: I0209 09:49:11.034179 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hhv6c" podStartSLOduration=-9.22337200282072e+09 pod.CreationTimestamp="2024-02-09 09:48:37 +0000 UTC" firstStartedPulling="2024-02-09 09:48:42.93838697 +0000 UTC m=+18.797623253" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:55.03024139 +0000 UTC m=+30.889477769" watchObservedRunningTime="2024-02-09 09:49:11.034054794 +0000 UTC m=+46.893291125" Feb 9 09:49:11.034679 kubelet[2000]: I0209 09:49:11.034541 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:49:11.035190 kubelet[2000]: I0209 09:49:11.035111 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:49:11.035752 kubelet[2000]: I0209 09:49:11.035662 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:49:11.117350 kubelet[2000]: I0209 09:49:11.117282 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d54f83-8d39-40c0-9378-be0277a74132-config-volume\") pod \"coredns-787d4945fb-np7dd\" (UID: \"34d54f83-8d39-40c0-9378-be0277a74132\") " pod="kube-system/coredns-787d4945fb-np7dd" Feb 9 09:49:11.117714 kubelet[2000]: I0209 09:49:11.117577 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/627c20d1-39af-46cf-8f3a-2d45fb6f84bb-config-volume\") pod \"coredns-787d4945fb-jwb92\" (UID: \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\") " pod="kube-system/coredns-787d4945fb-jwb92" Feb 9 09:49:11.117714 kubelet[2000]: I0209 09:49:11.117687 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfw5\" (UniqueName: \"kubernetes.io/projected/09b20649-bbc0-45d1-af93-aab9a21df100-kube-api-access-4gfw5\") pod \"calico-kube-controllers-cddd66c57-2zj4d\" (UID: \"09b20649-bbc0-45d1-af93-aab9a21df100\") " pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" Feb 9 09:49:11.118019 kubelet[2000]: I0209 09:49:11.117796 2000 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b20649-bbc0-45d1-af93-aab9a21df100-tigera-ca-bundle\") pod \"calico-kube-controllers-cddd66c57-2zj4d\" (UID: \"09b20649-bbc0-45d1-af93-aab9a21df100\") " pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" Feb 9 09:49:11.118160 kubelet[2000]: I0209 09:49:11.118131 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjg5t\" (UniqueName: \"kubernetes.io/projected/34d54f83-8d39-40c0-9378-be0277a74132-kube-api-access-fjg5t\") pod \"coredns-787d4945fb-np7dd\" (UID: \"34d54f83-8d39-40c0-9378-be0277a74132\") " pod="kube-system/coredns-787d4945fb-np7dd" Feb 9 09:49:11.118295 kubelet[2000]: I0209 09:49:11.118211 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvs5\" (UniqueName: \"kubernetes.io/projected/627c20d1-39af-46cf-8f3a-2d45fb6f84bb-kube-api-access-5nvs5\") pod \"coredns-787d4945fb-jwb92\" (UID: \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\") " pod="kube-system/coredns-787d4945fb-jwb92" Feb 9 09:49:11.174670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208-rootfs.mount: Deactivated successfully. Feb 9 09:49:11.341927 env[1550]: time="2024-02-09T09:49:11.341799578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddd66c57-2zj4d,Uid:09b20649-bbc0-45d1-af93-aab9a21df100,Namespace:calico-system,Attempt:0,}" Feb 9 09:49:11.343859 env[1550]: time="2024-02-09T09:49:11.343752671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-np7dd,Uid:34d54f83-8d39-40c0-9378-be0277a74132,Namespace:kube-system,Attempt:0,}" Feb 9 09:49:11.412955 kubelet[2000]: E0209 09:49:11.412859 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:11.645391 env[1550]: time="2024-02-09T09:49:11.645178489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jwb92,Uid:627c20d1-39af-46cf-8f3a-2d45fb6f84bb,Namespace:kube-system,Attempt:0,}" Feb 9 09:49:11.646033 env[1550]: time="2024-02-09T09:49:11.645932708Z" level=info msg="shim disconnected" id=1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208 Feb 9 09:49:11.646222 env[1550]: time="2024-02-09T09:49:11.646042880Z" level=warning msg="cleaning up after shim disconnected" id=1d2a2cd023b307608e7f2d0a9aa8e0b6ee87fb995ce8bc87d00c060c4bce1208 namespace=k8s.io Feb 9 09:49:11.646222 env[1550]: time="2024-02-09T09:49:11.646081611Z" level=info msg="cleaning up dead shim" Feb 9 09:49:11.668273 env[1550]: time="2024-02-09T09:49:11.668242043Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:49:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2501 runtime=io.containerd.runc.v2\n" Feb 9 09:49:11.677845 env[1550]: time="2024-02-09T09:49:11.677799482Z" level=error msg="Failed to destroy network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678078 env[1550]: time="2024-02-09T09:49:11.678052856Z" level=error msg="encountered an error cleaning up failed sandbox 
\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678120 env[1550]: time="2024-02-09T09:49:11.678089488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jwb92,Uid:627c20d1-39af-46cf-8f3a-2d45fb6f84bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678255 kubelet[2000]: E0209 09:49:11.678242 2000 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678300 kubelet[2000]: E0209 09:49:11.678293 2000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-jwb92" Feb 9 09:49:11.678329 kubelet[2000]: E0209 09:49:11.678318 2000 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-jwb92" Feb 9 09:49:11.678370 kubelet[2000]: E0209 09:49:11.678364 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-jwb92_kube-system(627c20d1-39af-46cf-8f3a-2d45fb6f84bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-jwb92_kube-system(627c20d1-39af-46cf-8f3a-2d45fb6f84bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-jwb92" podUID=627c20d1-39af-46cf-8f3a-2d45fb6f84bb Feb 9 09:49:11.678732 env[1550]: time="2024-02-09T09:49:11.678685986Z" level=error msg="Failed to destroy network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678876 env[1550]: time="2024-02-09T09:49:11.678829258Z" level=error msg="encountered an error 
cleaning up failed sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678876 env[1550]: time="2024-02-09T09:49:11.678853379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddd66c57-2zj4d,Uid:09b20649-bbc0-45d1-af93-aab9a21df100,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678993 kubelet[2000]: E0209 09:49:11.678950 2000 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.678993 kubelet[2000]: E0209 09:49:11.678971 2000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" Feb 9 09:49:11.678993 kubelet[2000]: E0209 09:49:11.678985 2000 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" Feb 9 09:49:11.679064 kubelet[2000]: E0209 09:49:11.679011 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddd66c57-2zj4d_calico-system(09b20649-bbc0-45d1-af93-aab9a21df100)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddd66c57-2zj4d_calico-system(09b20649-bbc0-45d1-af93-aab9a21df100)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:49:11.679102 env[1550]: time="2024-02-09T09:49:11.678984942Z" level=error msg="Failed to destroy network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 9 09:49:11.679124 env[1550]: time="2024-02-09T09:49:11.679107918Z" level=error msg="encountered an error cleaning up failed sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.679146 env[1550]: time="2024-02-09T09:49:11.679125266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-np7dd,Uid:34d54f83-8d39-40c0-9378-be0277a74132,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.679197 kubelet[2000]: E0209 09:49:11.679190 2000 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:11.679219 kubelet[2000]: E0209 09:49:11.679208 2000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-np7dd" Feb 9 09:49:11.679241 kubelet[2000]: E0209 09:49:11.679221 2000 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-np7dd" Feb 9 09:49:11.679261 kubelet[2000]: E0209 09:49:11.679241 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-np7dd_kube-system(34d54f83-8d39-40c0-9378-be0277a74132)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-np7dd_kube-system(34d54f83-8d39-40c0-9378-be0277a74132)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-np7dd" podUID=34d54f83-8d39-40c0-9378-be0277a74132 Feb 9 09:49:12.074052 kubelet[2000]: I0209 09:49:12.073974 2000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:12.076111 env[1550]: time="2024-02-09T09:49:12.076093118Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 09:49:12.076111 env[1550]: time="2024-02-09T09:49:12.076093135Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:49:12.076293 kubelet[2000]: I0209 09:49:12.076261 2000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:12.076514 env[1550]: time="2024-02-09T09:49:12.076502486Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:49:12.076641 kubelet[2000]: I0209 09:49:12.076634 2000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:12.076818 env[1550]: time="2024-02-09T09:49:12.076807187Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:49:12.088912 env[1550]: time="2024-02-09T09:49:12.088873063Z" level=error msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" failed" error="failed to destroy network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.089009 kubelet[2000]: E0209 09:49:12.088991 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:12.089043 kubelet[2000]: E0209 09:49:12.089023 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0} Feb 9 09:49:12.089067 kubelet[2000]: E0209 09:49:12.089045 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34d54f83-8d39-40c0-9378-be0277a74132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:12.089067 kubelet[2000]: E0209 09:49:12.089061 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34d54f83-8d39-40c0-9378-be0277a74132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-np7dd" podUID=34d54f83-8d39-40c0-9378-be0277a74132 Feb 9 09:49:12.092013 env[1550]: time="2024-02-09T09:49:12.091991496Z" level=error msg="StopPodSandbox for 
\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" failed" error="failed to destroy network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.092113 kubelet[2000]: E0209 09:49:12.092104 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:12.092151 kubelet[2000]: E0209 09:49:12.092122 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2} Feb 9 09:49:12.092151 kubelet[2000]: E0209 09:49:12.092143 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09b20649-bbc0-45d1-af93-aab9a21df100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:12.092220 kubelet[2000]: E0209 09:49:12.092158 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09b20649-bbc0-45d1-af93-aab9a21df100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:49:12.092261 env[1550]: time="2024-02-09T09:49:12.092219011Z" level=error msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" failed" error="failed to destroy network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.092324 kubelet[2000]: E0209 09:49:12.092320 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:12.092353 kubelet[2000]: E0209 09:49:12.092328 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298} Feb 9 09:49:12.092353 kubelet[2000]: E0209 09:49:12.092346 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:12.092428 kubelet[2000]: E0209 09:49:12.092359 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-jwb92" podUID=627c20d1-39af-46cf-8f3a-2d45fb6f84bb Feb 9 09:49:12.179436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2-shm.mount: Deactivated successfully. Feb 9 09:49:12.179863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0-shm.mount: Deactivated successfully. Feb 9 09:49:12.413850 kubelet[2000]: E0209 09:49:12.413675 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:12.953886 env[1550]: time="2024-02-09T09:49:12.953846540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5z52,Uid:94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8,Namespace:calico-system,Attempt:0,}" Feb 9 09:49:12.981180 env[1550]: time="2024-02-09T09:49:12.981116156Z" level=error msg="Failed to destroy network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.981363 env[1550]: time="2024-02-09T09:49:12.981342507Z" level=error msg="encountered an error cleaning up failed sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.981410 env[1550]: time="2024-02-09T09:49:12.981375145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5z52,Uid:94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.981572 kubelet[2000]: E0209 09:49:12.981525 2000 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:12.981572 kubelet[2000]: E0209 09:49:12.981569 2000 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5z52" Feb 9 09:49:12.981662 kubelet[2000]: E0209 09:49:12.981586 2000 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5z52" Feb 9 09:49:12.981662 kubelet[2000]: E0209 09:49:12.981629 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5z52_calico-system(94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5z52_calico-system(94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:12.983124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d-shm.mount: Deactivated successfully. 
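Every RunPodSandbox and StopPodSandbox failure in this stretch carries the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and that container does not start successfully until 09:49:29 below. A minimal sketch of the same precondition, purely illustrative (the wait loop, timeout values, and function name are our assumptions, not Calico's code; only the path comes from the errors above):

```python
import time

NODENAME_FILE = "/var/lib/calico/nodename"  # the path named in the errors above

def wait_for_calico_node(timeout_s: float = 120.0, poll_s: float = 2.0) -> str:
    """Poll until calico/node has written its nodename file; per the errors above,
    the CNI plugin will not set up or tear down pod networking until it exists."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with open(NODENAME_FILE) as f:
                return f.read().strip()
        except FileNotFoundError:
            time.sleep(poll_s)
    raise TimeoutError(f"{NODENAME_FILE} still missing; is calico/node running?")
```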
Feb 9 09:49:13.081300 kubelet[2000]: I0209 09:49:13.081199 2000 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:13.082395 env[1550]: time="2024-02-09T09:49:13.082280670Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:49:13.134288 env[1550]: time="2024-02-09T09:49:13.134186280Z" level=error msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" failed" error="failed to destroy network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:13.134449 kubelet[2000]: E0209 09:49:13.134430 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:13.134526 kubelet[2000]: E0209 09:49:13.134475 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d} Feb 9 09:49:13.134526 kubelet[2000]: E0209 09:49:13.134524 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:13.134649 kubelet[2000]: E0209 09:49:13.134566 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:13.414376 kubelet[2000]: E0209 09:49:13.414260 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:14.415700 kubelet[2000]: E0209 09:49:14.415587 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:15.415974 kubelet[2000]: E0209 09:49:15.415852 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:16.416609 kubelet[2000]: E0209 09:49:16.416507 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
09:49:17.417519 kubelet[2000]: E0209 09:49:17.417393 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:18.418749 kubelet[2000]: E0209 09:49:18.418630 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:19.419927 kubelet[2000]: E0209 09:49:19.419804 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:20.421067 kubelet[2000]: E0209 09:49:20.420983 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:21.422206 kubelet[2000]: E0209 09:49:21.422083 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:22.423073 kubelet[2000]: E0209 09:49:22.422954 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:22.954787 env[1550]: time="2024-02-09T09:49:22.954700438Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:49:22.967834 env[1550]: time="2024-02-09T09:49:22.967774222Z" level=error msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" failed" error="failed to destroy network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:22.967981 kubelet[2000]: E0209 09:49:22.967943 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:22.967981 kubelet[2000]: E0209 09:49:22.967968 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298} Feb 9 09:49:22.968060 kubelet[2000]: E0209 09:49:22.967992 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:22.968060 kubelet[2000]: E0209 09:49:22.968011 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"627c20d1-39af-46cf-8f3a-2d45fb6f84bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-jwb92" podUID=627c20d1-39af-46cf-8f3a-2d45fb6f84bb Feb 9 09:49:23.423751 kubelet[2000]: E0209 09:49:23.423631 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:23.952467 env[1550]: time="2024-02-09T09:49:23.952367185Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:49:23.952467 env[1550]: time="2024-02-09T09:49:23.952367192Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:49:23.980158 env[1550]: time="2024-02-09T09:49:23.980123666Z" level=error msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" failed" error="failed to destroy network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:23.980411 env[1550]: time="2024-02-09T09:49:23.980122779Z" level=error msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" failed" error="failed to destroy network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:49:23.980437 kubelet[2000]: E0209 09:49:23.980312 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:23.980437 kubelet[2000]: E0209 09:49:23.980340 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2} Feb 9 09:49:23.980437 kubelet[2000]: E0209 09:49:23.980364 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09b20649-bbc0-45d1-af93-aab9a21df100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:23.980437 kubelet[2000]: E0209 09:49:23.980314 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:23.980579 kubelet[2000]: E0209 09:49:23.980383 2000 
pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09b20649-bbc0-45d1-af93-aab9a21df100\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:49:23.980579 kubelet[2000]: E0209 09:49:23.980395 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0} Feb 9 09:49:23.980579 kubelet[2000]: E0209 09:49:23.980413 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34d54f83-8d39-40c0-9378-be0277a74132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:23.980579 kubelet[2000]: E0209 09:49:23.980427 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34d54f83-8d39-40c0-9378-be0277a74132\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-np7dd" podUID=34d54f83-8d39-40c0-9378-be0277a74132 Feb 9 09:49:24.379390 kubelet[2000]: E0209 09:49:24.379280 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:24.423814 kubelet[2000]: E0209 09:49:24.423787 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:25.424511 kubelet[2000]: E0209 09:49:25.424433 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:26.424660 kubelet[2000]: E0209 09:49:26.424593 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:27.425541 kubelet[2000]: E0209 09:49:27.425450 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:27.951088 env[1550]: time="2024-02-09T09:49:27.950966656Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:49:27.965134 env[1550]: time="2024-02-09T09:49:27.965103335Z" level=error msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" failed" error="failed to destroy network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 9 09:49:27.965262 kubelet[2000]: E0209 09:49:27.965250 2000 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:27.965292 kubelet[2000]: E0209 09:49:27.965269 2000 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d} Feb 9 09:49:27.965292 kubelet[2000]: E0209 09:49:27.965290 2000 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:49:27.965361 kubelet[2000]: E0209 09:49:27.965306 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5z52" podUID=94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 Feb 9 09:49:28.426367 kubelet[2000]: E0209 09:49:28.426310 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:29.426924 kubelet[2000]: E0209 09:49:29.426865 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:29.658092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555148585.mount: Deactivated successfully. 
Feb 9 09:49:29.680405 env[1550]: time="2024-02-09T09:49:29.680336274Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:29.680938 env[1550]: time="2024-02-09T09:49:29.680894893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:29.681639 env[1550]: time="2024-02-09T09:49:29.681599798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:29.682273 env[1550]: time="2024-02-09T09:49:29.682241739Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:29.682974 env[1550]: time="2024-02-09T09:49:29.682933569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 09:49:29.686795 env[1550]: time="2024-02-09T09:49:29.686782250Z" level=info msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 09:49:29.692294 env[1550]: time="2024-02-09T09:49:29.692246506Z" level=info msg="CreateContainer within sandbox \"7986e1b4902c152fb39ba0e77ac63fcbe34314358fd5af643e521e6506c3bc56\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7779f33c55fa5c629691bdef17eec92b87d1c74436a9a95d2a4df847e24d286a\"" Feb 9 09:49:29.692593 env[1550]: time="2024-02-09T09:49:29.692551157Z" level=info msg="StartContainer for \"7779f33c55fa5c629691bdef17eec92b87d1c74436a9a95d2a4df847e24d286a\"" Feb 9 09:49:29.747506 env[1550]: time="2024-02-09T09:49:29.747418290Z" level=info msg="StartContainer for \"7779f33c55fa5c629691bdef17eec92b87d1c74436a9a95d2a4df847e24d286a\" returns successfully" Feb 9 09:49:29.880827 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 09:49:29.880887 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 9 09:49:30.153283 kubelet[2000]: I0209 09:49:30.153258 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xdqjg" podStartSLOduration=-9.22337198370157e+09 pod.CreationTimestamp="2024-02-09 09:48:37 +0000 UTC" firstStartedPulling="2024-02-09 09:48:42.937619978 +0000 UTC m=+18.796856264" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:30.152973032 +0000 UTC m=+66.012209333" watchObservedRunningTime="2024-02-09 09:49:30.153204734 +0000 UTC m=+66.012441035" Feb 9 09:49:30.427297 kubelet[2000]: E0209 09:49:30.427089 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:31.244000 audit[3145]: AVC avc: denied { write } for pid=3145 comm="tee" name="fd" dev="proc" ino=8637 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.272345 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 09:49:31.272417 kernel: audit: type=1400 audit(1707472171.244:241): avc: denied { write } for pid=3145 comm="tee" name="fd" dev="proc" ino=8637 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 audit[3145]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0a24c96a a2=241 a3=1b6 items=1 ppid=3113 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.428270 kubelet[2000]: E0209 09:49:31.428220 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:31.431192 kernel: audit: type=1300 audit(1707472171.244:241): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0a24c96a a2=241 a3=1b6 items=1 ppid=3113 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.431218 kernel: audit: type=1307 audit(1707472171.244:241): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 09:49:31.462567 kernel: audit: type=1302 audit(1707472171.244:241): item=0 name="/dev/fd/63" inode=8634 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=8634 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.526704 kernel: audit: type=1327 audit(1707472171.244:241): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.587951 kernel: audit: type=1400 audit(1707472171.244:242): avc: denied { write } for pid=3151 comm="tee" name="fd" dev="proc" ino=17117 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 
audit[3151]: AVC avc: denied { write } for pid=3151 comm="tee" name="fd" dev="proc" ino=17117 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.651768 kernel: audit: type=1400 audit(1707472171.244:243): avc: denied { write } for pid=3146 comm="tee" name="fd" dev="proc" ino=33897 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 audit[3146]: AVC avc: denied { write } for pid=3146 comm="tee" name="fd" dev="proc" ino=33897 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.715722 kernel: audit: type=1300 audit(1707472171.244:242): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd738697a a2=241 a3=1b6 items=1 ppid=3111 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.244000 audit[3151]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd738697a a2=241 a3=1b6 items=1 ppid=3111 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.810798 kernel: audit: type=1307 audit(1707472171.244:242): cwd="/etc/service/enabled/bird6/log" Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 09:49:31.840485 kernel: audit: type=1302 audit(1707472171.244:242): item=0 name="/dev/fd/63" inode=17114 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=17114 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.244000 audit[3146]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2678f97a a2=241 a3=1b6 items=1 ppid=3114 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=33894 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.244000 audit[3156]: AVC avc: denied { write } for pid=3156 comm="tee" name="fd" dev="proc" ino=30459 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 audit[3156]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc2d11197b a2=241 a3=1b6 items=1 ppid=3117 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=30456 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.244000 audit[3154]: AVC avc: denied { write } for pid=3154 comm="tee" name="fd" dev="proc" ino=21016 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 audit[3154]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeef35a97c a2=241 a3=1b6 items=1 ppid=3112 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=21013 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.244000 audit[3155]: AVC avc: denied { write } for pid=3155 comm="tee" name="fd" dev="proc" ino=28358 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.244000 audit[3155]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4ec3396b a2=241 a3=1b6 items=1 ppid=3115 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.244000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 09:49:31.244000 audit: PATH item=0 name="/dev/fd/63" inode=15049 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:31.245000 audit[3160]: AVC avc: denied { write } for pid=3160 comm="tee" name="fd" dev="proc" ino=33901 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:49:31.245000 audit[3160]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffddef5197a a2=241 a3=1b6 items=1 ppid=3116 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:31.245000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 09:49:31.245000 audit: PATH item=0 name="/dev/fd/63" inode=29084 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:49:31.245000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:49:32.429372 kubelet[2000]: E0209 09:49:32.429249 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:33.429653 kubelet[2000]: E0209 09:49:33.429542 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:34.430271 kubelet[2000]: E0209 09:49:34.430226 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:34.952686 env[1550]: time="2024-02-09T09:49:34.952552140Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.025 [INFO][3357] k8s.go 578: Cleaning up netns ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.025 [INFO][3357] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" iface="eth0" netns="/var/run/netns/cni-694541e4-cca3-27af-a88c-9ce952fb2462" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.025 [INFO][3357] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" iface="eth0" netns="/var/run/netns/cni-694541e4-cca3-27af-a88c-9ce952fb2462" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.026 [INFO][3357] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" iface="eth0" netns="/var/run/netns/cni-694541e4-cca3-27af-a88c-9ce952fb2462" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.026 [INFO][3357] k8s.go 585: Releasing IP address(es) ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.026 [INFO][3357] utils.go 188: Calico CNI releasing IP address ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.047 [INFO][3375] ipam_plugin.go 415: Releasing address using handleID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.047 [INFO][3375] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.047 [INFO][3375] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.057 [WARNING][3375] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.057 [INFO][3375] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.058 [INFO][3375] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:49:35.061050 env[1550]: 2024-02-09 09:49:35.060 [INFO][3357] k8s.go 591: Teardown processing complete. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:49:35.061576 env[1550]: time="2024-02-09T09:49:35.061096867Z" level=info msg="TearDown network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" successfully" Feb 9 09:49:35.061576 env[1550]: time="2024-02-09T09:49:35.061133524Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" returns successfully" Feb 9 09:49:35.061823 env[1550]: time="2024-02-09T09:49:35.061771822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddd66c57-2zj4d,Uid:09b20649-bbc0-45d1-af93-aab9a21df100,Namespace:calico-system,Attempt:1,}" Feb 9 09:49:35.064276 systemd[1]: run-netns-cni\x2d694541e4\x2dcca3\x2d27af\x2da88c\x2d9ce952fb2462.mount: Deactivated successfully. Feb 9 09:49:35.270493 kernel: Initializing XFRM netlink socket Feb 9 09:49:35.275508 systemd-networkd[1407]: cali22b4c084c44: Link UP Feb 9 09:49:35.330702 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:49:35.330775 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22b4c084c44: link becomes ready Feb 9 09:49:35.330826 systemd-networkd[1407]: cali22b4c084c44: Gained carrier Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.083 [INFO][3391] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.104 [INFO][3391] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0 calico-kube-controllers-cddd66c57- calico-system 09b20649-bbc0-45d1-af93-aab9a21df100 1556 0 2024-02-09 09:41:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cddd66c57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.67.80.11 calico-kube-controllers-cddd66c57-2zj4d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali22b4c084c44 [] []}} ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.105 [INFO][3391] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" 
WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.164 [INFO][3413] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" HandleID="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.184 [INFO][3413] ipam_plugin.go 268: Auto assigning IP ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" HandleID="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b93d0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.67.80.11", "pod":"calico-kube-controllers-cddd66c57-2zj4d", "timestamp":"2024-02-09 09:49:35.164133933 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.184 [INFO][3413] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.184 [INFO][3413] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.184 [INFO][3413] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.187 [INFO][3413] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.193 [INFO][3413] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.198 [INFO][3413] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.200 [INFO][3413] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.202 [INFO][3413] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.202 [INFO][3413] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.204 [INFO][3413] ipam.go 1682: Creating new handle: k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3 Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.208 [INFO][3413] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.213 [INFO][3413] ipam.go 1216: Successfully claimed IPs: [192.168.5.65/26] block=192.168.5.64/26 handle="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.213 [INFO][3413] 
ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.65/26] handle="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" host="10.67.80.11" Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.213 [INFO][3413] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:49:35.336548 env[1550]: 2024-02-09 09:49:35.213 [INFO][3413] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.65/26] IPv6=[] ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" HandleID="k8s-pod-network.384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.220 [INFO][3391] k8s.go 385: Populated endpoint ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0", GenerateName:"calico-kube-controllers-cddd66c57-", Namespace:"calico-system", SelfLink:"", UID:"09b20649-bbc0-45d1-af93-aab9a21df100", ResourceVersion:"1556", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddd66c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"calico-kube-controllers-cddd66c57-2zj4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22b4c084c44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.220 [INFO][3391] k8s.go 386: Calico CNI using IPs: [192.168.5.65/32] ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.220 [INFO][3391] dataplane_linux.go 68: Setting the host side veth name to cali22b4c084c44 ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.330 [INFO][3391] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" 
WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.331 [INFO][3391] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0", GenerateName:"calico-kube-controllers-cddd66c57-", Namespace:"calico-system", SelfLink:"", UID:"09b20649-bbc0-45d1-af93-aab9a21df100", ResourceVersion:"1556", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddd66c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3", Pod:"calico-kube-controllers-cddd66c57-2zj4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22b4c084c44", MAC:"42:32:18:b6:3c:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:35.336949 env[1550]: 2024-02-09 09:49:35.335 [INFO][3391] k8s.go 491: Wrote updated endpoint to datastore ContainerID="384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3" Namespace="calico-system" Pod="calico-kube-controllers-cddd66c57-2zj4d" WorkloadEndpoint="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:49:35.342387 env[1550]: time="2024-02-09T09:49:35.342330006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:49:35.342387 env[1550]: time="2024-02-09T09:49:35.342350975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:49:35.342387 env[1550]: time="2024-02-09T09:49:35.342357736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:49:35.342483 env[1550]: time="2024-02-09T09:49:35.342435896Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3 pid=3455 runtime=io.containerd.runc.v2 Feb 9 09:49:35.396581 env[1550]: time="2024-02-09T09:49:35.396553311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddd66c57-2zj4d,Uid:09b20649-bbc0-45d1-af93-aab9a21df100,Namespace:calico-system,Attempt:1,} returns sandbox id \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\"" Feb 9 09:49:35.397378 env[1550]: time="2024-02-09T09:49:35.397363153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 09:49:35.430405 kubelet[2000]: E0209 09:49:35.430360 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:36.249951 systemd[1]: Started sshd@7-139.178.94.23:22-146.190.237.14:44114.service. Feb 9 09:49:36.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.23:22-146.190.237.14:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:36.276496 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 9 09:49:36.276560 kernel: audit: type=1130 audit(1707472176.248:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.23:22-146.190.237.14:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:36.430833 kubelet[2000]: E0209 09:49:36.430791 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:36.567994 systemd-networkd[1407]: cali22b4c084c44: Gained IPv6LL Feb 9 09:49:36.952839 env[1550]: time="2024-02-09T09:49:36.952521736Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:49:36.954964 sshd[3520]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=146.190.237.14 user=root Feb 9 09:49:36.953000 audit[3520]: USER_AUTH pid=3520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=146.190.237.14 addr=146.190.237.14 terminal=ssh res=failed' Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.003 [INFO][3576] k8s.go 578: Cleaning up netns ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.003 [INFO][3576] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" iface="eth0" netns="/var/run/netns/cni-f1b17618-2671-b324-5300-52166bcac5f4" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.003 [INFO][3576] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" iface="eth0" netns="/var/run/netns/cni-f1b17618-2671-b324-5300-52166bcac5f4" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.004 [INFO][3576] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" iface="eth0" netns="/var/run/netns/cni-f1b17618-2671-b324-5300-52166bcac5f4" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.004 [INFO][3576] k8s.go 585: Releasing IP address(es) ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.004 [INFO][3576] utils.go 188: Calico CNI releasing IP address ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.014 [INFO][3590] ipam_plugin.go 415: Releasing address using handleID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.014 [INFO][3590] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.014 [INFO][3590] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.019 [WARNING][3590] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.019 [INFO][3590] ipam_plugin.go 443: Releasing address using workloadID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.020 [INFO][3590] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:49:37.022238 env[1550]: 2024-02-09 09:49:37.021 [INFO][3576] k8s.go 591: Teardown processing complete. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:49:37.024115 systemd[1]: run-netns-cni\x2df1b17618\x2d2671\x2db324\x2d5300\x2d52166bcac5f4.mount: Deactivated successfully. Feb 9 09:49:37.043115 env[1550]: time="2024-02-09T09:49:37.043065006Z" level=info msg="TearDown network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" successfully" Feb 9 09:49:37.043115 env[1550]: time="2024-02-09T09:49:37.043087687Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" returns successfully" Feb 9 09:49:37.043442 env[1550]: time="2024-02-09T09:49:37.043400489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jwb92,Uid:627c20d1-39af-46cf-8f3a-2d45fb6f84bb,Namespace:kube-system,Attempt:1,}" Feb 9 09:49:37.043487 kernel: audit: type=1100 audit(1707472176.953:249): pid=3520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=146.190.237.14 addr=146.190.237.14 terminal=ssh res=failed' Feb 9 09:49:37.139181 systemd-networkd[1407]: cali29e143e6bfb: Link UP Feb 9 09:49:37.199211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:49:37.199245 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali29e143e6bfb: link becomes ready Feb 9 09:49:37.199320 systemd-networkd[1407]: cali29e143e6bfb: Gained carrier Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.056 [INFO][3605] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.067 [INFO][3605] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0 coredns-787d4945fb- kube-system 627c20d1-39af-46cf-8f3a-2d45fb6f84bb 1565 0 2024-02-09 09:41:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.67.80.11 coredns-787d4945fb-jwb92 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29e143e6bfb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.067 [INFO][3605] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.083 [INFO][3625] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" HandleID="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.095 [INFO][3625] ipam_plugin.go 268: Auto assigning IP ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" HandleID="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b48b0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.67.80.11", "pod":"coredns-787d4945fb-jwb92", "timestamp":"2024-02-09 09:49:37.08340227 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.095 [INFO][3625] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.095 [INFO][3625] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.095 [INFO][3625] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.097 [INFO][3625] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.102 [INFO][3625] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.106 [INFO][3625] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.110 [INFO][3625] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.115 [INFO][3625] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.115 [INFO][3625] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.117 [INFO][3625] ipam.go 1682: Creating new handle: k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274 Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.123 [INFO][3625] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.131 [INFO][3625] ipam.go 1216: Successfully claimed IPs: [192.168.5.66/26] block=192.168.5.64/26 handle="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.131 [INFO][3625] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.66/26] handle="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" host="10.67.80.11" Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.132 [INFO][3625] ipam_plugin.go 377: Released host-wide IPAM lock. 
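The entries above trace Calico's IPAM path for the coredns pod: the plugin takes the host-wide lock, confirms this node's affinity for the 192.168.5.64/26 block, loads the block, claims the next free address (192.168.5.66, since .65 already went to calico-kube-controllers), writes the block back, and releases the lock. As a rough, self-contained illustration of only the "assign one address from the block" step — not the actual libcalico-go implementation, and ignoring the datastore, locking, and handle bookkeeping — a minimal Go sketch might look like this:

package main

import (
	"fmt"
	"net"
)

// nextFreeIP scans a small CIDR block and returns the first address that is
// not already allocated. It loosely mirrors the "attempting to assign 1
// addresses from block" step in the log; real Calico IPAM also tracks
// handles, affinities, and reservations in its datastore.
func nextFreeIP(block string, allocated map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(block)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = increment(cur) {
		if !allocated[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("block %s is exhausted", block)
}

// increment returns the IPv4 address immediately after ip.
func increment(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	// 192.168.5.65 was handed to calico-kube-controllers earlier in the log,
	// so the next assignment from this block comes out as 192.168.5.66.
	used := map[string]bool{"192.168.5.64": true, "192.168.5.65": true}
	ip, err := nextFreeIP("192.168.5.64/26", used)
	if err != nil {
		panic(err)
	}
	fmt.Println("next free address:", ip) // next free address: 192.168.5.66
}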
Feb 9 09:49:37.215165 env[1550]: 2024-02-09 09:49:37.132 [INFO][3625] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.66/26] IPv6=[] ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" HandleID="k8s-pod-network.392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.135 [INFO][3605] k8s.go 385: Populated endpoint ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"627c20d1-39af-46cf-8f3a-2d45fb6f84bb", ResourceVersion:"1565", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"coredns-787d4945fb-jwb92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e143e6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.136 [INFO][3605] k8s.go 386: Calico CNI using IPs: [192.168.5.66/32] ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.136 [INFO][3605] dataplane_linux.go 68: Setting the host side veth name to cali29e143e6bfb ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.199 [INFO][3605] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.199 [INFO][3605] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"627c20d1-39af-46cf-8f3a-2d45fb6f84bb", ResourceVersion:"1565", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274", Pod:"coredns-787d4945fb-jwb92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e143e6bfb", MAC:"96:94:32:57:aa:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:37.216731 env[1550]: 2024-02-09 09:49:37.213 [INFO][3605] k8s.go 491: Wrote updated endpoint to datastore ContainerID="392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274" Namespace="kube-system" Pod="coredns-787d4945fb-jwb92" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:49:37.229269 env[1550]: time="2024-02-09T09:49:37.229157533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:49:37.229269 env[1550]: time="2024-02-09T09:49:37.229211437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:49:37.229269 env[1550]: time="2024-02-09T09:49:37.229231122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:49:37.229501 env[1550]: time="2024-02-09T09:49:37.229417288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274 pid=3659 runtime=io.containerd.runc.v2 Feb 9 09:49:37.288265 env[1550]: time="2024-02-09T09:49:37.288208625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jwb92,Uid:627c20d1-39af-46cf-8f3a-2d45fb6f84bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274\"" Feb 9 09:49:37.431456 kubelet[2000]: E0209 09:49:37.431351 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:38.359766 systemd-networkd[1407]: cali29e143e6bfb: Gained IPv6LL Feb 9 09:49:38.432007 kubelet[2000]: E0209 09:49:38.431899 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:38.784833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024314227.mount: Deactivated successfully. Feb 9 09:49:38.952630 env[1550]: time="2024-02-09T09:49:38.952508328Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:49:38.952630 env[1550]: time="2024-02-09T09:49:38.952526144Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.032 [INFO][3806] k8s.go 578: Cleaning up netns ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.032 [INFO][3806] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" iface="eth0" netns="/var/run/netns/cni-02248749-feb0-5130-f9b8-59eab6787351" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.033 [INFO][3806] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" iface="eth0" netns="/var/run/netns/cni-02248749-feb0-5130-f9b8-59eab6787351" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.033 [INFO][3806] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" iface="eth0" netns="/var/run/netns/cni-02248749-feb0-5130-f9b8-59eab6787351" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.033 [INFO][3806] k8s.go 585: Releasing IP address(es) ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.033 [INFO][3806] utils.go 188: Calico CNI releasing IP address ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.070 [INFO][3841] ipam_plugin.go 415: Releasing address using handleID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.070 [INFO][3841] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
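The kubelet message that repeats throughout this excerpt, "Unable to read config path ... /etc/kubernetes/manifests", comes from kubelet's static-pod file source: it is configured to watch that directory, finds it missing, and logs the error on every sync. On a node that runs no static pods it is harmless noise, and it should stop once the directory exists. A hypothetical helper in Go (the path is taken from the log; running anything on the node is an assumption, not something the log prescribes):

package main

import (
	"log"
	"os"
)

func main() {
	// kubelet's file source watches this directory for static pod manifests;
	// creating it, even empty, is normally enough to silence the repeated
	// "Unable to read config path" errors seen above.
	const manifestDir = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		log.Fatalf("creating %s: %v", manifestDir, err)
	}
	log.Printf("%s exists; kubelet will pick it up on its next sync", manifestDir)
}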
Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.070 [INFO][3841] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.086 [WARNING][3841] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.086 [INFO][3841] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.089 [INFO][3841] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:49:39.092462 env[1550]: 2024-02-09 09:49:39.090 [INFO][3806] k8s.go 591: Teardown processing complete. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:49:39.093449 env[1550]: time="2024-02-09T09:49:39.092578976Z" level=info msg="TearDown network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" successfully" Feb 9 09:49:39.093449 env[1550]: time="2024-02-09T09:49:39.092633667Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" returns successfully" Feb 9 09:49:39.093633 env[1550]: time="2024-02-09T09:49:39.093554151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5z52,Uid:94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8,Namespace:calico-system,Attempt:1,}" Feb 9 09:49:39.096653 systemd[1]: run-netns-cni\x2d02248749\x2dfeb0\x2d5130\x2df9b8\x2d59eab6787351.mount: Deactivated successfully. Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.032 [INFO][3807] k8s.go 578: Cleaning up netns ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.033 [INFO][3807] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" iface="eth0" netns="/var/run/netns/cni-63b18d37-61b3-f908-5559-2cb0834e90a7" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.033 [INFO][3807] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" iface="eth0" netns="/var/run/netns/cni-63b18d37-61b3-f908-5559-2cb0834e90a7" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.033 [INFO][3807] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" iface="eth0" netns="/var/run/netns/cni-63b18d37-61b3-f908-5559-2cb0834e90a7" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.034 [INFO][3807] k8s.go 585: Releasing IP address(es) ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.034 [INFO][3807] utils.go 188: Calico CNI releasing IP address ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.070 [INFO][3842] ipam_plugin.go 415: Releasing address using handleID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.070 [INFO][3842] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.089 [INFO][3842] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.104 [WARNING][3842] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.105 [INFO][3842] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.109 [INFO][3842] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:49:39.113324 env[1550]: 2024-02-09 09:49:39.111 [INFO][3807] k8s.go 591: Teardown processing complete. 
ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:49:39.114396 env[1550]: time="2024-02-09T09:49:39.113568261Z" level=info msg="TearDown network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" successfully" Feb 9 09:49:39.114396 env[1550]: time="2024-02-09T09:49:39.113637584Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" returns successfully" Feb 9 09:49:39.114662 env[1550]: time="2024-02-09T09:49:39.114572480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-np7dd,Uid:34d54f83-8d39-40c0-9378-be0277a74132,Namespace:kube-system,Attempt:1,}" Feb 9 09:49:39.238661 sshd[3520]: Failed password for root from 146.190.237.14 port 44114 ssh2 Feb 9 09:49:39.276861 systemd-networkd[1407]: cali326eda024d9: Link UP Feb 9 09:49:39.342553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:49:39.342595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali326eda024d9: link becomes ready Feb 9 09:49:39.342604 systemd-networkd[1407]: cali326eda024d9: Gained carrier Feb 9 09:49:39.343106 systemd-networkd[1407]: cali962bfa4cca0: Link UP Feb 9 09:49:39.343488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali962bfa4cca0: link becomes ready Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.140 [INFO][3874] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.165 [INFO][3874] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-csi--node--driver--s5z52-eth0 csi-node-driver- calico-system 94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8 1576 0 2024-02-09 09:48:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.80.11 csi-node-driver-s5z52 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali326eda024d9 [] []}} ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.165 [INFO][3874] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.199 [INFO][3922] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" HandleID="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.220 [INFO][3922] ipam_plugin.go 268: Auto assigning IP ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" HandleID="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028fa80), Attrs:map[string]string{"namespace":"calico-system", "node":"10.67.80.11", "pod":"csi-node-driver-s5z52", 
"timestamp":"2024-02-09 09:49:39.199677413 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.220 [INFO][3922] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.220 [INFO][3922] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.220 [INFO][3922] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.223 [INFO][3922] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.230 [INFO][3922] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.237 [INFO][3922] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.241 [INFO][3922] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.247 [INFO][3922] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.247 [INFO][3922] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.251 [INFO][3922] ipam.go 1682: Creating new handle: k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.259 [INFO][3922] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.270 [INFO][3922] ipam.go 1216: Successfully claimed IPs: [192.168.5.67/26] block=192.168.5.64/26 handle="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.270 [INFO][3922] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.67/26] handle="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" host="10.67.80.11" Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.270 [INFO][3922] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:49:39.358462 env[1550]: 2024-02-09 09:49:39.270 [INFO][3922] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.67/26] IPv6=[] ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" HandleID="k8s-pod-network.c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.273 [INFO][3874] k8s.go 385: Populated endpoint ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-csi--node--driver--s5z52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"csi-node-driver-s5z52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali326eda024d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.274 [INFO][3874] k8s.go 386: Calico CNI using IPs: [192.168.5.67/32] ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.274 [INFO][3874] dataplane_linux.go 68: Setting the host side veth name to cali326eda024d9 ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.342 [INFO][3874] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.342 [INFO][3874] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-csi--node--driver--s5z52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e", Pod:"csi-node-driver-s5z52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali326eda024d9", MAC:"f2:62:81:c4:ab:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:39.358901 env[1550]: 2024-02-09 09:49:39.357 [INFO][3874] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e" Namespace="calico-system" Pod="csi-node-driver-s5z52" WorkloadEndpoint="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:49:39.363871 env[1550]: time="2024-02-09T09:49:39.363840196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:49:39.363871 env[1550]: time="2024-02-09T09:49:39.363862936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:49:39.363871 env[1550]: time="2024-02-09T09:49:39.363869594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:49:39.364033 env[1550]: time="2024-02-09T09:49:39.363973811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e pid=3976 runtime=io.containerd.runc.v2 Feb 9 09:49:39.369421 systemd-networkd[1407]: cali962bfa4cca0: Gained carrier Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.153 [INFO][3891] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.177 [INFO][3891] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0 coredns-787d4945fb- kube-system 34d54f83-8d39-40c0-9378-be0277a74132 1577 0 2024-02-09 09:41:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.67.80.11 coredns-787d4945fb-np7dd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali962bfa4cca0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.178 [INFO][3891] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.207 [INFO][3934] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" HandleID="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.229 [INFO][3934] ipam_plugin.go 268: Auto assigning IP ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" HandleID="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125250), Attrs:map[string]string{"namespace":"kube-system", "node":"10.67.80.11", "pod":"coredns-787d4945fb-np7dd", "timestamp":"2024-02-09 09:49:39.207496915 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.229 [INFO][3934] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.270 [INFO][3934] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.270 [INFO][3934] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.274 [INFO][3934] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.283 [INFO][3934] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.292 [INFO][3934] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.296 [INFO][3934] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.301 [INFO][3934] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.301 [INFO][3934] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.305 [INFO][3934] ipam.go 1682: Creating new handle: k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512 Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.312 [INFO][3934] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.321 [INFO][3934] ipam.go 1216: Successfully claimed IPs: [192.168.5.68/26] block=192.168.5.64/26 handle="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.321 [INFO][3934] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.68/26] handle="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" host="10.67.80.11" Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.321 [INFO][3934] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:49:39.384274 env[1550]: 2024-02-09 09:49:39.321 [INFO][3934] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.68/26] IPv6=[] ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" HandleID="k8s-pod-network.c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.323 [INFO][3891] k8s.go 385: Populated endpoint ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"34d54f83-8d39-40c0-9378-be0277a74132", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"coredns-787d4945fb-np7dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali962bfa4cca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.323 [INFO][3891] k8s.go 386: Calico CNI using IPs: [192.168.5.68/32] ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.323 [INFO][3891] dataplane_linux.go 68: Setting the host side veth name to cali962bfa4cca0 ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.369 [INFO][3891] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.369 [INFO][3891] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"34d54f83-8d39-40c0-9378-be0277a74132", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512", Pod:"coredns-787d4945fb-np7dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali962bfa4cca0", MAC:"92:6f:60:10:61:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:49:39.384768 env[1550]: 2024-02-09 09:49:39.383 [INFO][3891] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512" Namespace="kube-system" Pod="coredns-787d4945fb-np7dd" WorkloadEndpoint="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:49:39.389642 env[1550]: time="2024-02-09T09:49:39.389611420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:49:39.389642 env[1550]: time="2024-02-09T09:49:39.389632990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:49:39.389642 env[1550]: time="2024-02-09T09:49:39.389640127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:49:39.389769 env[1550]: time="2024-02-09T09:49:39.389726385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512 pid=4019 runtime=io.containerd.runc.v2 Feb 9 09:49:39.405320 env[1550]: time="2024-02-09T09:49:39.405274668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5z52,Uid:94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e\"" Feb 9 09:49:39.433010 kubelet[2000]: E0209 09:49:39.432913 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:39.436806 env[1550]: time="2024-02-09T09:49:39.436726915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-np7dd,Uid:34d54f83-8d39-40c0-9378-be0277a74132,Namespace:kube-system,Attempt:1,} returns sandbox id \"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512\"" Feb 9 09:49:39.791050 systemd[1]: run-netns-cni\x2d63b18d37\x2d61b3\x2df908\x2d5559\x2d2cb0834e90a7.mount: Deactivated successfully. Feb 9 09:49:40.236809 sshd[3520]: Connection closed by authenticating user root 146.190.237.14 port 44114 [preauth] Feb 9 09:49:40.237442 systemd[1]: sshd@7-139.178.94.23:22-146.190.237.14:44114.service: Deactivated successfully. Feb 9 09:49:40.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.23:22-146.190.237.14:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:49:40.320679 kernel: audit: type=1131 audit(1707472180.236:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.94.23:22-146.190.237.14:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:49:40.433215 kubelet[2000]: E0209 09:49:40.433139 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:40.600560 systemd-networkd[1407]: cali962bfa4cca0: Gained IPv6LL Feb 9 09:49:40.983808 systemd-networkd[1407]: cali326eda024d9: Gained IPv6LL Feb 9 09:49:41.433856 kubelet[2000]: E0209 09:49:41.433789 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:42.434937 kubelet[2000]: E0209 09:49:42.434875 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:43.435410 kubelet[2000]: E0209 09:49:43.435300 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:44.379065 kubelet[2000]: E0209 09:49:44.378953 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:44.436647 kubelet[2000]: E0209 09:49:44.436529 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:45.437634 kubelet[2000]: E0209 09:49:45.437588 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:45.516589 env[1550]: time="2024-02-09T09:49:45.516543137Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:45.517287 env[1550]: time="2024-02-09T09:49:45.517245336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:45.518363 env[1550]: time="2024-02-09T09:49:45.518349947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:45.519952 env[1550]: time="2024-02-09T09:49:45.519924224Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:45.520271 env[1550]: time="2024-02-09T09:49:45.520255628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 09:49:45.520815 env[1550]: time="2024-02-09T09:49:45.520800238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:49:45.524287 env[1550]: time="2024-02-09T09:49:45.524272854Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 09:49:45.527850 env[1550]: time="2024-02-09T09:49:45.527808189Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf\"" Feb 9 09:49:45.528004 env[1550]: 
time="2024-02-09T09:49:45.527991309Z" level=info msg="StartContainer for \"f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf\"" Feb 9 09:49:45.584691 env[1550]: time="2024-02-09T09:49:45.584661708Z" level=info msg="StartContainer for \"f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf\" returns successfully" Feb 9 09:49:46.199803 kubelet[2000]: I0209 09:49:46.199739 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podStartSLOduration=-9.223371554655087e+09 pod.CreationTimestamp="2024-02-09 09:41:44 +0000 UTC" firstStartedPulling="2024-02-09 09:49:35.397183177 +0000 UTC m=+71.256419460" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:46.199501402 +0000 UTC m=+82.058737717" watchObservedRunningTime="2024-02-09 09:49:46.199689734 +0000 UTC m=+82.058926049" Feb 9 09:49:46.437867 kubelet[2000]: E0209 09:49:46.437818 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:46.529841 env[1550]: time="2024-02-09T09:49:46.529801283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:46.530440 env[1550]: time="2024-02-09T09:49:46.530400091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:46.531206 env[1550]: time="2024-02-09T09:49:46.531159553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:46.532349 env[1550]: time="2024-02-09T09:49:46.532315480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:46.532630 env[1550]: time="2024-02-09T09:49:46.532588194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 09:49:46.532944 env[1550]: time="2024-02-09T09:49:46.532901358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 09:49:46.533715 env[1550]: time="2024-02-09T09:49:46.533678848Z" level=info msg="CreateContainer within sandbox \"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:49:46.538152 env[1550]: time="2024-02-09T09:49:46.538116346Z" level=info msg="CreateContainer within sandbox \"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a23cd0d22da484754557db0051f8177f51ad3adbe11597d8cb40e34b627c6a6\"" Feb 9 09:49:46.538431 env[1550]: time="2024-02-09T09:49:46.538375557Z" level=info msg="StartContainer for \"7a23cd0d22da484754557db0051f8177f51ad3adbe11597d8cb40e34b627c6a6\"" Feb 9 09:49:46.583334 env[1550]: time="2024-02-09T09:49:46.583279531Z" level=info msg="StartContainer for \"7a23cd0d22da484754557db0051f8177f51ad3adbe11597d8cb40e34b627c6a6\" returns successfully" Feb 9 09:49:47.215344 kubelet[2000]: I0209 
09:49:47.215315 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jwb92" podStartSLOduration=-9.2233715476395e+09 pod.CreationTimestamp="2024-02-09 09:41:38 +0000 UTC" firstStartedPulling="2024-02-09 09:49:37.288789049 +0000 UTC m=+73.148025335" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:47.215141346 +0000 UTC m=+83.074377652" watchObservedRunningTime="2024-02-09 09:49:47.215277054 +0000 UTC m=+83.074513351" Feb 9 09:49:47.238000 audit[4528]: NETFILTER_CFG table=filter:79 family=2 entries=14 op=nft_register_rule pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:47.238000 audit[4528]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe01ad5010 a2=0 a3=7ffe01ad4ffc items=0 ppid=2277 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:47.382983 kernel: audit: type=1325 audit(1707472187.238:251): table=filter:79 family=2 entries=14 op=nft_register_rule pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:47.383024 kernel: audit: type=1300 audit(1707472187.238:251): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe01ad5010 a2=0 a3=7ffe01ad4ffc items=0 ppid=2277 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:47.383040 kernel: audit: type=1327 audit(1707472187.238:251): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:47.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:47.438514 kubelet[2000]: E0209 09:49:47.438503 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:47.239000 audit[4528]: NETFILTER_CFG table=nat:80 family=2 entries=20 op=nft_register_rule pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:47.239000 audit[4528]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe01ad5010 a2=0 a3=7ffe01ad4ffc items=0 ppid=2277 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:47.585705 kernel: audit: type=1325 audit(1707472187.239:252): table=nat:80 family=2 entries=20 op=nft_register_rule pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:47.585735 kernel: audit: type=1300 audit(1707472187.239:252): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe01ad5010 a2=0 a3=7ffe01ad4ffc items=0 ppid=2277 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:47.585752 kernel: audit: type=1327 audit(1707472187.239:252): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:47.239000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:48.438675 kubelet[2000]: E0209 09:49:48.438605 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:49.439774 kubelet[2000]: E0209 09:49:49.439697 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:50.440275 kubelet[2000]: E0209 09:49:50.440172 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:50.999436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182462202.mount: Deactivated successfully. Feb 9 09:49:51.441289 kubelet[2000]: E0209 09:49:51.441097 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:52.442026 kubelet[2000]: E0209 09:49:52.441947 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:53.443025 kubelet[2000]: E0209 09:49:53.442974 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:53.515087 env[1550]: time="2024-02-09T09:49:53.515036987Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.515611 env[1550]: time="2024-02-09T09:49:53.515572117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.516300 env[1550]: time="2024-02-09T09:49:53.516256904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.517321 env[1550]: time="2024-02-09T09:49:53.517273580Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.517646 env[1550]: time="2024-02-09T09:49:53.517603319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 09:49:53.518268 env[1550]: time="2024-02-09T09:49:53.518180421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:49:53.518897 env[1550]: time="2024-02-09T09:49:53.518869922Z" level=info msg="CreateContainer within sandbox \"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 09:49:53.524249 env[1550]: time="2024-02-09T09:49:53.524203213Z" level=info msg="CreateContainer within sandbox \"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55e61e3cb3e70749ae3b5c82db7249bff42e86b7e7cf1cb7a14e5166c807cbc2\"" Feb 9 09:49:53.524564 env[1550]: time="2024-02-09T09:49:53.524495102Z" level=info msg="StartContainer for \"55e61e3cb3e70749ae3b5c82db7249bff42e86b7e7cf1cb7a14e5166c807cbc2\"" Feb 9 09:49:53.573082 env[1550]: 
time="2024-02-09T09:49:53.573027569Z" level=info msg="StartContainer for \"55e61e3cb3e70749ae3b5c82db7249bff42e86b7e7cf1cb7a14e5166c807cbc2\" returns successfully" Feb 9 09:49:53.685943 env[1550]: time="2024-02-09T09:49:53.685805312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.688653 env[1550]: time="2024-02-09T09:49:53.688544366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.692633 env[1550]: time="2024-02-09T09:49:53.692526826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.696809 env[1550]: time="2024-02-09T09:49:53.696603846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:53.698568 env[1550]: time="2024-02-09T09:49:53.698468933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 09:49:53.699696 env[1550]: time="2024-02-09T09:49:53.699594972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 09:49:53.702563 env[1550]: time="2024-02-09T09:49:53.702441056Z" level=info msg="CreateContainer within sandbox \"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:49:53.716849 env[1550]: time="2024-02-09T09:49:53.716805675Z" level=info msg="CreateContainer within sandbox \"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11d835a42c57a5936392e2dcf76f6fb47648797fbfbf5c4cec86d9e2802ae079\"" Feb 9 09:49:53.717057 env[1550]: time="2024-02-09T09:49:53.717016994Z" level=info msg="StartContainer for \"11d835a42c57a5936392e2dcf76f6fb47648797fbfbf5c4cec86d9e2802ae079\"" Feb 9 09:49:53.764056 env[1550]: time="2024-02-09T09:49:53.763995794Z" level=info msg="StartContainer for \"11d835a42c57a5936392e2dcf76f6fb47648797fbfbf5c4cec86d9e2802ae079\" returns successfully" Feb 9 09:49:54.233804 kubelet[2000]: I0209 09:49:54.233730 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-np7dd" podStartSLOduration=-9.223371540621136e+09 pod.CreationTimestamp="2024-02-09 09:41:38 +0000 UTC" firstStartedPulling="2024-02-09 09:49:39.438907433 +0000 UTC m=+75.298143774" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:54.233180958 +0000 UTC m=+90.092417314" watchObservedRunningTime="2024-02-09 09:49:54.233639776 +0000 UTC m=+90.092876111" Feb 9 09:49:54.324000 audit[4899]: NETFILTER_CFG table=filter:81 family=2 entries=14 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:54.324000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe08fed020 a2=0 a3=7ffe08fed00c items=0 ppid=2277 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:54.444062 kubelet[2000]: E0209 09:49:54.444023 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:54.480124 kernel: audit: type=1325 audit(1707472194.324:253): table=filter:81 family=2 entries=14 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:54.480204 kernel: audit: type=1300 audit(1707472194.324:253): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe08fed020 a2=0 a3=7ffe08fed00c items=0 ppid=2277 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:54.480225 kernel: audit: type=1327 audit(1707472194.324:253): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:54.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:54.534000 audit[4899]: NETFILTER_CFG table=nat:82 family=2 entries=20 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:54.534000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe08fed020 a2=0 a3=7ffe08fed00c items=0 ppid=2277 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:54.681388 kernel: audit: type=1325 audit(1707472194.534:254): table=nat:82 family=2 entries=20 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:49:54.681417 kernel: audit: type=1300 audit(1707472194.534:254): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe08fed020 a2=0 a3=7ffe08fed00c items=0 ppid=2277 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:49:54.681436 kernel: audit: type=1327 audit(1707472194.534:254): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:54.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:49:55.445048 kubelet[2000]: E0209 09:49:55.444973 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:56.445874 kubelet[2000]: E0209 09:49:56.445814 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:57.446202 kubelet[2000]: E0209 09:49:57.446086 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:58.447019 kubelet[2000]: E0209 09:49:58.446956 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:59.093783 env[1550]: time="2024-02-09T09:49:59.093716073Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:59.094385 env[1550]: time="2024-02-09T09:49:59.094344046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:59.095125 env[1550]: time="2024-02-09T09:49:59.095068676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:59.096035 env[1550]: time="2024-02-09T09:49:59.095994930Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:49:59.096236 env[1550]: time="2024-02-09T09:49:59.096195658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 09:49:59.097307 env[1550]: time="2024-02-09T09:49:59.097237423Z" level=info msg="CreateContainer within sandbox \"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 09:49:59.102020 env[1550]: time="2024-02-09T09:49:59.101978212Z" level=info msg="CreateContainer within sandbox \"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a56f5171fbae3f457389b51481a314550195970c27d773c20dd3697041715657\"" Feb 9 09:49:59.102180 env[1550]: time="2024-02-09T09:49:59.102139076Z" level=info msg="StartContainer for \"a56f5171fbae3f457389b51481a314550195970c27d773c20dd3697041715657\"" Feb 9 09:49:59.153401 env[1550]: time="2024-02-09T09:49:59.153349464Z" level=info msg="StartContainer for \"a56f5171fbae3f457389b51481a314550195970c27d773c20dd3697041715657\" returns successfully" Feb 9 09:49:59.261623 kubelet[2000]: I0209 09:49:59.261579 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-s5z52" podStartSLOduration=-9.22337195459322e+09 pod.CreationTimestamp="2024-02-09 09:48:37 +0000 UTC" firstStartedPulling="2024-02-09 09:49:39.405837987 +0000 UTC m=+75.265074269" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:49:59.261074354 +0000 UTC m=+95.120310639" watchObservedRunningTime="2024-02-09 09:49:59.261556644 +0000 UTC m=+95.120792926" Feb 9 09:49:59.448182 kubelet[2000]: E0209 09:49:59.448002 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:59.702575 kubelet[2000]: I0209 09:49:59.702488 2000 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 09:49:59.702575 kubelet[2000]: I0209 09:49:59.702529 2000 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 09:50:00.448707 kubelet[2000]: E0209 09:50:00.448621 2000 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:01.449835 kubelet[2000]: E0209 09:50:01.449722 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:02.450596 kubelet[2000]: E0209 09:50:02.450465 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:03.451001 kubelet[2000]: E0209 09:50:03.450938 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:04.378729 kubelet[2000]: E0209 09:50:04.378606 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:04.451715 kubelet[2000]: E0209 09:50:04.451665 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:05.452009 kubelet[2000]: E0209 09:50:05.451895 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:06.452196 kubelet[2000]: E0209 09:50:06.452078 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:07.454153 kubelet[2000]: E0209 09:50:07.452932 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:08.453566 kubelet[2000]: E0209 09:50:08.453442 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:09.454016 kubelet[2000]: E0209 09:50:09.453944 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:10.454382 kubelet[2000]: E0209 09:50:10.454313 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:11.454499 kubelet[2000]: E0209 09:50:11.454419 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:12.455456 kubelet[2000]: E0209 09:50:12.455354 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:13.455732 kubelet[2000]: E0209 09:50:13.455620 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:14.456396 kubelet[2000]: E0209 09:50:14.456289 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:15.457054 kubelet[2000]: E0209 09:50:15.456977 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:16.457265 kubelet[2000]: E0209 09:50:16.457130 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:17.425176 systemd[1]: Started sshd@8-139.178.94.23:22-218.92.0.22:64396.service. Feb 9 09:50:17.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.23:22-218.92.0.22:64396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:50:17.457841 kubelet[2000]: E0209 09:50:17.457801 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:17.511579 kernel: audit: type=1130 audit(1707472217.423:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.23:22-218.92.0.22:64396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:18.458895 kubelet[2000]: E0209 09:50:18.458768 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:18.913023 sshd[5865]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:18.911000 audit[5865]: USER_AUTH pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:19.000682 kernel: audit: type=1100 audit(1707472218.911:256): pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:19.459100 kubelet[2000]: E0209 09:50:19.458991 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:20.459728 kubelet[2000]: E0209 09:50:20.459608 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:21.297510 sshd[5865]: Failed password for root from 218.92.0.22 port 64396 ssh2 Feb 9 09:50:21.460809 kubelet[2000]: E0209 09:50:21.460703 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:22.222000 audit[5865]: USER_AUTH pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:22.310530 kernel: audit: type=1100 audit(1707472222.222:257): pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:22.461909 kubelet[2000]: E0209 09:50:22.461790 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:23.462530 kubelet[2000]: E0209 09:50:23.462417 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:24.156994 sshd[5865]: Failed password for root from 218.92.0.22 port 64396 ssh2 Feb 9 09:50:24.379825 kubelet[2000]: E0209 09:50:24.379715 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:24.388048 env[1550]: time="2024-02-09T09:50:24.387923116Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.433 [WARNING][6143] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"34d54f83-8d39-40c0-9378-be0277a74132", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512", Pod:"coredns-787d4945fb-np7dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali962bfa4cca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.433 [INFO][6143] k8s.go 578: Cleaning up netns ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.433 [INFO][6143] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" iface="eth0" netns="" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.433 [INFO][6143] k8s.go 585: Releasing IP address(es) ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.433 [INFO][6143] utils.go 188: Calico CNI releasing IP address ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.444 [INFO][6156] ipam_plugin.go 415: Releasing address using handleID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.444 [INFO][6156] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.444 [INFO][6156] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.454 [WARNING][6156] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.454 [INFO][6156] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.456 [INFO][6156] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.458580 env[1550]: 2024-02-09 09:50:24.457 [INFO][6143] k8s.go 591: Teardown processing complete. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.458580 env[1550]: time="2024-02-09T09:50:24.458557928Z" level=info msg="TearDown network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" successfully" Feb 9 09:50:24.459436 env[1550]: time="2024-02-09T09:50:24.458586164Z" level=info msg="StopPodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" returns successfully" Feb 9 09:50:24.459436 env[1550]: time="2024-02-09T09:50:24.459040456Z" level=info msg="RemovePodSandbox for \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:50:24.459436 env[1550]: time="2024-02-09T09:50:24.459086848Z" level=info msg="Forcibly stopping sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\"" Feb 9 09:50:24.463321 kubelet[2000]: E0209 09:50:24.463299 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.494 [WARNING][6184] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"34d54f83-8d39-40c0-9378-be0277a74132", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c927706ea79936a1d4df815310210c11695f0314bb9d9540a97c35fcde116512", Pod:"coredns-787d4945fb-np7dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali962bfa4cca0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.494 [INFO][6184] k8s.go 578: Cleaning up netns ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.494 [INFO][6184] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" iface="eth0" netns="" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.494 [INFO][6184] k8s.go 585: Releasing IP address(es) ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.494 [INFO][6184] utils.go 188: Calico CNI releasing IP address ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.505 [INFO][6199] ipam_plugin.go 415: Releasing address using handleID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.505 [INFO][6199] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.505 [INFO][6199] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.520 [WARNING][6199] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.520 [INFO][6199] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" HandleID="k8s-pod-network.e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Workload="10.67.80.11-k8s-coredns--787d4945fb--np7dd-eth0" Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.523 [INFO][6199] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.525536 env[1550]: 2024-02-09 09:50:24.524 [INFO][6184] k8s.go 591: Teardown processing complete. ContainerID="e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0" Feb 9 09:50:24.526097 env[1550]: time="2024-02-09T09:50:24.525558276Z" level=info msg="TearDown network for sandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" successfully" Feb 9 09:50:24.527096 env[1550]: time="2024-02-09T09:50:24.527042341Z" level=info msg="RemovePodSandbox \"e6b7d3af03612a67bcdcfd7e9a65c7dd450d0662664e7805d692d55cfcdb57d0\" returns successfully" Feb 9 09:50:24.527463 env[1550]: time="2024-02-09T09:50:24.527439773Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.563 [WARNING][6229] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0", GenerateName:"calico-kube-controllers-cddd66c57-", Namespace:"calico-system", SelfLink:"", UID:"09b20649-bbc0-45d1-af93-aab9a21df100", ResourceVersion:"1604", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddd66c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3", Pod:"calico-kube-controllers-cddd66c57-2zj4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22b4c084c44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.563 [INFO][6229] k8s.go 578: Cleaning up netns ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.563 [INFO][6229] dataplane_linux.go 526: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" iface="eth0" netns="" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.564 [INFO][6229] k8s.go 585: Releasing IP address(es) ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.564 [INFO][6229] utils.go 188: Calico CNI releasing IP address ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.607 [INFO][6247] ipam_plugin.go 415: Releasing address using handleID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.608 [INFO][6247] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.608 [INFO][6247] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.622 [WARNING][6247] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.622 [INFO][6247] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.625 [INFO][6247] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.627725 env[1550]: 2024-02-09 09:50:24.626 [INFO][6229] k8s.go 591: Teardown processing complete. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.627725 env[1550]: time="2024-02-09T09:50:24.627708549Z" level=info msg="TearDown network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" successfully" Feb 9 09:50:24.628423 env[1550]: time="2024-02-09T09:50:24.627740304Z" level=info msg="StopPodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" returns successfully" Feb 9 09:50:24.628423 env[1550]: time="2024-02-09T09:50:24.628155653Z" level=info msg="RemovePodSandbox for \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:50:24.628423 env[1550]: time="2024-02-09T09:50:24.628191794Z" level=info msg="Forcibly stopping sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\"" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.661 [WARNING][6276] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0", GenerateName:"calico-kube-controllers-cddd66c57-", Namespace:"calico-system", SelfLink:"", UID:"09b20649-bbc0-45d1-af93-aab9a21df100", ResourceVersion:"1604", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddd66c57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3", Pod:"calico-kube-controllers-cddd66c57-2zj4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22b4c084c44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.662 [INFO][6276] k8s.go 578: Cleaning up netns ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.662 [INFO][6276] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" iface="eth0" netns="" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.662 [INFO][6276] k8s.go 585: Releasing IP address(es) ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.662 [INFO][6276] utils.go 188: Calico CNI releasing IP address ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.681 [INFO][6290] ipam_plugin.go 415: Releasing address using handleID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.682 [INFO][6290] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.682 [INFO][6290] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.688 [WARNING][6290] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.688 [INFO][6290] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" HandleID="k8s-pod-network.f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Workload="10.67.80.11-k8s-calico--kube--controllers--cddd66c57--2zj4d-eth0" Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.690 [INFO][6290] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.692978 env[1550]: 2024-02-09 09:50:24.691 [INFO][6276] k8s.go 591: Teardown processing complete. ContainerID="f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2" Feb 9 09:50:24.693688 env[1550]: time="2024-02-09T09:50:24.692971963Z" level=info msg="TearDown network for sandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" successfully" Feb 9 09:50:24.694860 env[1550]: time="2024-02-09T09:50:24.694806439Z" level=info msg="RemovePodSandbox \"f7bb075776bdae8f77387f537e02d438a584c802857d2614fa988675407d30c2\" returns successfully" Feb 9 09:50:24.695182 env[1550]: time="2024-02-09T09:50:24.695154371Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.743 [WARNING][6323] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-csi--node--driver--s5z52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8", ResourceVersion:"1656", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e", Pod:"csi-node-driver-s5z52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali326eda024d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.743 [INFO][6323] k8s.go 578: Cleaning up netns ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.743 [INFO][6323] dataplane_linux.go 526: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" iface="eth0" netns="" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.743 [INFO][6323] k8s.go 585: Releasing IP address(es) ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.743 [INFO][6323] utils.go 188: Calico CNI releasing IP address ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.768 [INFO][6340] ipam_plugin.go 415: Releasing address using handleID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.768 [INFO][6340] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.768 [INFO][6340] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.783 [WARNING][6340] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.783 [INFO][6340] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.786 [INFO][6340] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.789419 env[1550]: 2024-02-09 09:50:24.788 [INFO][6323] k8s.go 591: Teardown processing complete. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.790237 env[1550]: time="2024-02-09T09:50:24.789419541Z" level=info msg="TearDown network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" successfully" Feb 9 09:50:24.790237 env[1550]: time="2024-02-09T09:50:24.789467218Z" level=info msg="StopPodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" returns successfully" Feb 9 09:50:24.790237 env[1550]: time="2024-02-09T09:50:24.789915032Z" level=info msg="RemovePodSandbox for \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:50:24.790237 env[1550]: time="2024-02-09T09:50:24.789958249Z" level=info msg="Forcibly stopping sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\"" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.835 [WARNING][6373] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-csi--node--driver--s5z52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"94d477a4-f5c6-47a2-8b9f-f1bf82cb53b8", ResourceVersion:"1656", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 48, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"c99d376741a866b10f856ecab74cc71c940be20fc50d5712b3183b761df4c66e", Pod:"csi-node-driver-s5z52", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali326eda024d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.835 [INFO][6373] k8s.go 578: Cleaning up netns ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.835 [INFO][6373] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" iface="eth0" netns="" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.835 [INFO][6373] k8s.go 585: Releasing IP address(es) ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.835 [INFO][6373] utils.go 188: Calico CNI releasing IP address ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.864 [INFO][6391] ipam_plugin.go 415: Releasing address using handleID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.864 [INFO][6391] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.864 [INFO][6391] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.881 [WARNING][6391] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.881 [INFO][6391] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" HandleID="k8s-pod-network.2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Workload="10.67.80.11-k8s-csi--node--driver--s5z52-eth0" Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.884 [INFO][6391] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.887998 env[1550]: 2024-02-09 09:50:24.885 [INFO][6373] k8s.go 591: Teardown processing complete. ContainerID="2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d" Feb 9 09:50:24.889016 env[1550]: time="2024-02-09T09:50:24.887999178Z" level=info msg="TearDown network for sandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" successfully" Feb 9 09:50:24.890546 env[1550]: time="2024-02-09T09:50:24.890497925Z" level=info msg="RemovePodSandbox \"2e17d51eff61948326edfd123f085859bde9d9fbd2016d217a56fcd7b306882d\" returns successfully" Feb 9 09:50:24.891180 env[1550]: time="2024-02-09T09:50:24.891099649Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.936 [WARNING][6432] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"627c20d1-39af-46cf-8f3a-2d45fb6f84bb", ResourceVersion:"1611", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274", Pod:"coredns-787d4945fb-jwb92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e143e6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:24.961333 env[1550]: 
2024-02-09 09:50:24.936 [INFO][6432] k8s.go 578: Cleaning up netns ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.936 [INFO][6432] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" iface="eth0" netns="" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.936 [INFO][6432] k8s.go 585: Releasing IP address(es) ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.936 [INFO][6432] utils.go 188: Calico CNI releasing IP address ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.952 [INFO][6456] ipam_plugin.go 415: Releasing address using handleID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.952 [INFO][6456] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.952 [INFO][6456] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.958 [WARNING][6456] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.958 [INFO][6456] ipam_plugin.go 443: Releasing address using workloadID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.959 [INFO][6456] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:24.961333 env[1550]: 2024-02-09 09:50:24.960 [INFO][6432] k8s.go 591: Teardown processing complete. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:24.961799 env[1550]: time="2024-02-09T09:50:24.961326719Z" level=info msg="TearDown network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" successfully" Feb 9 09:50:24.961799 env[1550]: time="2024-02-09T09:50:24.961349252Z" level=info msg="StopPodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" returns successfully" Feb 9 09:50:24.961799 env[1550]: time="2024-02-09T09:50:24.961615123Z" level=info msg="RemovePodSandbox for \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:50:24.961799 env[1550]: time="2024-02-09T09:50:24.961641118Z" level=info msg="Forcibly stopping sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\"" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:24.990 [WARNING][6491] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"627c20d1-39af-46cf-8f3a-2d45fb6f84bb", ResourceVersion:"1611", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 41, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"392716876f5a12fb47f98d02b898faace2359bd795bc4145dcc29f8e08c49274", Pod:"coredns-787d4945fb-jwb92", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e143e6bfb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:24.990 [INFO][6491] k8s.go 578: Cleaning up netns ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:24.990 [INFO][6491] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" iface="eth0" netns="" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:24.990 [INFO][6491] k8s.go 585: Releasing IP address(es) ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:24.990 [INFO][6491] utils.go 188: Calico CNI releasing IP address ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.004 [INFO][6507] ipam_plugin.go 415: Releasing address using handleID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.004 [INFO][6507] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.004 [INFO][6507] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.018 [WARNING][6507] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.018 [INFO][6507] ipam_plugin.go 443: Releasing address using workloadID ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" HandleID="k8s-pod-network.34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Workload="10.67.80.11-k8s-coredns--787d4945fb--jwb92-eth0" Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.021 [INFO][6507] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:50:25.023608 env[1550]: 2024-02-09 09:50:25.022 [INFO][6491] k8s.go 591: Teardown processing complete. ContainerID="34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298" Feb 9 09:50:25.024070 env[1550]: time="2024-02-09T09:50:25.023605913Z" level=info msg="TearDown network for sandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" successfully" Feb 9 09:50:25.025017 env[1550]: time="2024-02-09T09:50:25.024958563Z" level=info msg="RemovePodSandbox \"34a3f1adfdde146ee32a79f4e893db65005139607579c9de281ae0fb79481298\" returns successfully" Feb 9 09:50:25.464526 kubelet[2000]: E0209 09:50:25.464377 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:25.534000 audit[5865]: USER_AUTH pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:25.623550 kernel: audit: type=1100 audit(1707472225.534:258): pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:26.465230 kubelet[2000]: E0209 09:50:26.465130 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:27.465823 kubelet[2000]: E0209 09:50:27.465703 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:27.879838 sshd[5865]: Failed password for root from 218.92.0.22 port 64396 ssh2 Feb 9 09:50:28.466744 kubelet[2000]: E0209 09:50:28.466616 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:28.846518 sshd[5865]: Received disconnect from 218.92.0.22 port 64396:11: [preauth] Feb 9 09:50:28.846518 sshd[5865]: Disconnected from authenticating user root 218.92.0.22 port 64396 [preauth] Feb 9 09:50:28.847087 sshd[5865]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:28.849151 systemd[1]: sshd@8-139.178.94.23:22-218.92.0.22:64396.service: Deactivated successfully. Feb 9 09:50:28.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.23:22-218.92.0.22:64396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:50:28.940682 kernel: audit: type=1131 audit(1707472228.848:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.94.23:22-218.92.0.22:64396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:28.969536 systemd[1]: Started sshd@9-139.178.94.23:22-218.92.0.22:16742.service. Feb 9 09:50:28.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.23:22-218.92.0.22:16742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:29.060564 kernel: audit: type=1130 audit(1707472228.968:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.23:22-218.92.0.22:16742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:29.467364 kubelet[2000]: E0209 09:50:29.467285 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:29.917357 sshd[6692]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:29.916000 audit[6692]: ANOM_LOGIN_FAILURES pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:29.917667 sshd[6692]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 09:50:29.916000 audit[6692]: USER_AUTH pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:30.075471 kernel: audit: type=2100 audit(1707472229.916:261): pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='pam_faillock uid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:30.075564 kernel: audit: type=1100 audit(1707472229.916:262): pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:30.468471 kubelet[2000]: E0209 09:50:30.468347 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:30.763023 systemd[1]: Started sshd@10-139.178.94.23:22-218.92.0.34:21953.service. Feb 9 09:50:30.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.23:22-218.92.0.34:21953 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:30.856695 kernel: audit: type=1130 audit(1707472230.761:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.23:22-218.92.0.34:21953 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:50:31.469357 kubelet[2000]: E0209 09:50:31.469245 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:31.810218 sshd[6692]: Failed password for root from 218.92.0.22 port 16742 ssh2 Feb 9 09:50:32.469916 kubelet[2000]: E0209 09:50:32.469803 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:33.206000 audit[6692]: USER_AUTH pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:33.298689 kernel: audit: type=1100 audit(1707472233.206:264): pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:33.471102 kubelet[2000]: E0209 09:50:33.470912 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:34.471595 kubelet[2000]: E0209 09:50:34.471471 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:35.315599 sshd[6692]: Failed password for root from 218.92.0.22 port 16742 ssh2 Feb 9 09:50:35.471798 kubelet[2000]: E0209 09:50:35.471698 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:36.471934 kubelet[2000]: E0209 09:50:36.471823 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:36.496000 audit[6692]: USER_AUTH pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:36.589562 kernel: audit: type=1100 audit(1707472236.496:265): pid=6692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:37.472137 kubelet[2000]: E0209 09:50:37.472016 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:38.351121 sshd[6692]: Failed password for root from 218.92.0.22 port 16742 ssh2 Feb 9 09:50:38.472952 kubelet[2000]: E0209 09:50:38.472897 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:39.084040 sshd[6789]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:50:39.083000 audit[6789]: USER_AUTH pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:39.175519 kernel: audit: type=1100 audit(1707472239.083:266): pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:39.474099 kubelet[2000]: E0209 09:50:39.473999 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:39.788230 sshd[6692]: Received disconnect from 218.92.0.22 port 16742:11: [preauth] Feb 9 09:50:39.788230 sshd[6692]: Disconnected from authenticating user root 218.92.0.22 port 16742 [preauth] Feb 9 09:50:39.788843 sshd[6692]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:39.790820 systemd[1]: sshd@9-139.178.94.23:22-218.92.0.22:16742.service: Deactivated successfully. Feb 9 09:50:39.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.23:22-218.92.0.22:16742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:39.884676 kernel: audit: type=1131 audit(1707472239.791:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.94.23:22-218.92.0.22:16742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:39.945657 systemd[1]: Started sshd@11-139.178.94.23:22-218.92.0.22:27683.service. Feb 9 09:50:39.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.23:22-218.92.0.22:27683 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:40.045689 kernel: audit: type=1130 audit(1707472239.945:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.23:22-218.92.0.22:27683 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:40.474188 kubelet[2000]: E0209 09:50:40.474144 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:40.942368 sshd[7149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:40.942000 audit[7149]: USER_AUTH pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:41.042540 kernel: audit: type=1100 audit(1707472240.942:269): pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:41.474799 kubelet[2000]: E0209 09:50:41.474715 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:41.683696 sshd[6789]: Failed password for root from 218.92.0.34 port 21953 ssh2 Feb 9 09:50:42.387000 audit[6789]: USER_AUTH pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:42.475275 kubelet[2000]: E0209 09:50:42.475260 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:42.479544 kernel: audit: type=1100 audit(1707472242.387:270): pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:42.679830 sshd[7149]: Failed password for root from 218.92.0.22 port 27683 ssh2 Feb 9 09:50:43.475726 kubelet[2000]: E0209 09:50:43.475617 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:44.237000 audit[7149]: USER_AUTH pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:44.329670 kernel: audit: type=1100 audit(1707472244.237:271): pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:44.378902 kubelet[2000]: E0209 09:50:44.378794 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:44.476168 kubelet[2000]: E0209 09:50:44.476064 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:44.731805 sshd[6789]: Failed password for root from 218.92.0.34 port 21953 ssh2 Feb 9 09:50:45.476946 kubelet[2000]: E0209 09:50:45.476828 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:45.666852 env[1550]: time="2024-02-09T09:50:45.666817789Z" level=info msg="shim disconnected" id=f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf Feb 9 09:50:45.667091 env[1550]: time="2024-02-09T09:50:45.666856772Z" level=warning msg="cleaning up after shim disconnected" id=f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf namespace=k8s.io Feb 9 09:50:45.667091 env[1550]: time="2024-02-09T09:50:45.666866654Z" level=info msg="cleaning up dead shim" Feb 9 09:50:45.668249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf-rootfs.mount: Deactivated successfully. Feb 9 09:50:45.682391 env[1550]: time="2024-02-09T09:50:45.682365405Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:50:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7378 runtime=io.containerd.runc.v2\n" Feb 9 09:50:45.689000 audit[6789]: USER_AUTH pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:45.781697 kernel: audit: type=1100 audit(1707472245.689:272): pid=6789 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:46.190529 sshd[7149]: Failed password for root from 218.92.0.22 port 27683 ssh2 Feb 9 09:50:46.382030 kubelet[2000]: I0209 09:50:46.381944 2000 scope.go:115] "RemoveContainer" containerID="f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf" Feb 9 09:50:46.386857 env[1550]: time="2024-02-09T09:50:46.386677176Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:1,}" Feb 9 09:50:46.399411 env[1550]: time="2024-02-09T09:50:46.399372468Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:1,} returns container id \"dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624\"" Feb 9 09:50:46.399785 env[1550]: time="2024-02-09T09:50:46.399725943Z" level=info msg="StartContainer for \"dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624\"" Feb 9 09:50:46.447712 env[1550]: time="2024-02-09T09:50:46.447655162Z" level=info msg="StartContainer for \"dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624\" returns successfully" Feb 9 09:50:46.477825 kubelet[2000]: E0209 09:50:46.477779 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:47.478796 kubelet[2000]: E0209 09:50:47.478745 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:47.531000 audit[7149]: USER_AUTH pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:47.622539 kernel: audit: type=1100 audit(1707472247.531:273): pid=7149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.22 addr=218.92.0.22 terminal=ssh res=failed' Feb 9 09:50:47.777978 sshd[6789]: Failed password for root from 218.92.0.34 port 21953 ssh2 Feb 9 09:50:48.479641 kubelet[2000]: E0209 09:50:48.479567 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:48.993439 sshd[6789]: Received disconnect from 218.92.0.34 port 21953:11: [preauth] Feb 9 09:50:48.993439 sshd[6789]: Disconnected from authenticating user root 218.92.0.34 port 21953 [preauth] Feb 9 09:50:48.994005 sshd[6789]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:50:48.996151 systemd[1]: sshd@10-139.178.94.23:22-218.92.0.34:21953.service: Deactivated successfully. Feb 9 09:50:48.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.23:22-218.92.0.34:21953 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:49.088695 kernel: audit: type=1131 audit(1707472248.996:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.94.23:22-218.92.0.34:21953 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:50:49.154489 systemd[1]: Started sshd@12-139.178.94.23:22-218.92.0.34:43629.service. Feb 9 09:50:49.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.23:22-218.92.0.34:43629 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:49.229049 sshd[7149]: Failed password for root from 218.92.0.22 port 27683 ssh2 Feb 9 09:50:49.246698 kernel: audit: type=1130 audit(1707472249.154:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.23:22-218.92.0.34:43629 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:49.480083 kubelet[2000]: E0209 09:50:49.479975 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:50.197977 sshd[7617]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:50:50.197000 audit[7617]: USER_AUTH pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:50.290554 kernel: audit: type=1100 audit(1707472250.197:276): pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:50.481379 kubelet[2000]: E0209 09:50:50.481184 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:51.014384 sshd[7149]: Received disconnect from 218.92.0.22 port 27683:11: [preauth] Feb 9 09:50:51.014384 sshd[7149]: Disconnected from authenticating user root 218.92.0.22 port 27683 [preauth] Feb 9 09:50:51.014553 sshd[7149]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 9 09:50:51.014986 systemd[1]: sshd@11-139.178.94.23:22-218.92.0.22:27683.service: Deactivated successfully. Feb 9 09:50:51.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.23:22-218.92.0.22:27683 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:50:51.106570 kernel: audit: type=1131 audit(1707472251.014:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.94.23:22-218.92.0.22:27683 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:50:51.482356 kubelet[2000]: E0209 09:50:51.482276 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:52.482569 kubelet[2000]: E0209 09:50:52.482451 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:52.642640 sshd[7617]: Failed password for root from 218.92.0.34 port 43629 ssh2 Feb 9 09:50:53.483697 kubelet[2000]: E0209 09:50:53.483577 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:53.501000 audit[7617]: USER_AUTH pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:53.593678 kernel: audit: type=1100 audit(1707472253.501:278): pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:54.484702 kubelet[2000]: E0209 09:50:54.484582 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:55.484843 kubelet[2000]: E0209 09:50:55.484732 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:55.690692 sshd[7617]: Failed password for root from 218.92.0.34 port 43629 ssh2 Feb 9 09:50:56.485598 kubelet[2000]: E0209 09:50:56.485528 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:56.805000 audit[7617]: USER_AUTH pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:56.896557 kernel: audit: type=1100 audit(1707472256.805:279): pid=7617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:50:57.486264 kubelet[2000]: E0209 09:50:57.486210 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:58.486605 kubelet[2000]: E0209 09:50:58.486470 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:50:58.737748 sshd[7617]: Failed password for root from 218.92.0.34 port 43629 ssh2 Feb 9 09:50:59.487069 kubelet[2000]: E0209 09:50:59.486917 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:00.109775 sshd[7617]: Received disconnect from 218.92.0.34 port 43629:11: [preauth] Feb 9 09:51:00.109775 sshd[7617]: Disconnected from authenticating user root 218.92.0.34 port 43629 [preauth] Feb 9 09:51:00.110345 sshd[7617]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:51:00.112381 systemd[1]: sshd@12-139.178.94.23:22-218.92.0.34:43629.service: Deactivated successfully. 
Feb 9 09:51:00.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.23:22-218.92.0.34:43629 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:00.205558 kernel: audit: type=1131 audit(1707472260.112:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.94.23:22-218.92.0.34:43629 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:00.249949 systemd[1]: Started sshd@13-139.178.94.23:22-218.92.0.34:61300.service. Feb 9 09:51:00.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.23:22-218.92.0.34:61300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:00.340543 kernel: audit: type=1130 audit(1707472260.249:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.23:22-218.92.0.34:61300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:00.487571 kubelet[2000]: E0209 09:51:00.487455 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:01.233995 sshd[8069]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:51:01.233000 audit[8069]: USER_AUTH pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:01.326701 kernel: audit: type=1100 audit(1707472261.233:282): pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:01.488295 kubelet[2000]: E0209 09:51:01.488166 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:02.489203 kubelet[2000]: E0209 09:51:02.489152 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:03.490204 kubelet[2000]: E0209 09:51:03.490185 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:03.853882 sshd[8069]: Failed password for root from 218.92.0.34 port 61300 ssh2 Feb 9 09:51:04.379093 kubelet[2000]: E0209 09:51:04.379030 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:04.490667 kubelet[2000]: E0209 09:51:04.490559 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:04.527000 audit[8069]: USER_AUTH pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:04.620686 kernel: audit: type=1100 audit(1707472264.527:283): pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:05.491572 kubelet[2000]: E0209 09:51:05.491447 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:06.225022 sshd[8069]: Failed password for root from 218.92.0.34 port 61300 ssh2 Feb 9 09:51:06.492292 kubelet[2000]: E0209 09:51:06.492241 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:07.493017 kubelet[2000]: E0209 09:51:07.492908 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:07.823000 audit[8069]: USER_AUTH pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:07.923542 kernel: audit: type=1100 audit(1707472267.823:284): pid=8069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.34 addr=218.92.0.34 terminal=ssh res=failed' Feb 9 09:51:08.493229 kubelet[2000]: E0209 09:51:08.493177 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:09.493412 kubelet[2000]: E0209 09:51:09.493307 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:09.932271 sshd[8069]: Failed password for root from 218.92.0.34 port 61300 ssh2 Feb 9 09:51:10.494715 kubelet[2000]: E0209 09:51:10.494614 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:11.118233 sshd[8069]: Received disconnect from 218.92.0.34 port 61300:11: [preauth] Feb 9 09:51:11.118233 sshd[8069]: Disconnected from authenticating user root 218.92.0.34 port 61300 [preauth] Feb 9 09:51:11.118809 sshd[8069]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.34 user=root Feb 9 09:51:11.120786 systemd[1]: sshd@13-139.178.94.23:22-218.92.0.34:61300.service: Deactivated successfully. Feb 9 09:51:11.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.23:22-218.92.0.34:61300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:11.214819 kernel: audit: type=1131 audit(1707472271.120:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.94.23:22-218.92.0.34:61300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:51:11.495365 kubelet[2000]: E0209 09:51:11.495257 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:12.496155 kubelet[2000]: E0209 09:51:12.496033 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:13.497434 kubelet[2000]: E0209 09:51:13.497303 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:14.498513 kubelet[2000]: E0209 09:51:14.498372 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:15.498678 kubelet[2000]: E0209 09:51:15.498572 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:16.498785 kubelet[2000]: E0209 09:51:16.498709 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:17.499168 kubelet[2000]: E0209 09:51:17.499061 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:18.500175 kubelet[2000]: E0209 09:51:18.500062 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:19.500371 kubelet[2000]: E0209 09:51:19.500257 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:20.501539 kubelet[2000]: E0209 09:51:20.501435 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:21.501729 kubelet[2000]: E0209 09:51:21.501614 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:22.502370 kubelet[2000]: E0209 09:51:22.502259 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:23.503350 kubelet[2000]: E0209 09:51:23.503297 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:24.379455 kubelet[2000]: E0209 09:51:24.379332 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:24.504269 kubelet[2000]: E0209 09:51:24.504152 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:25.505004 kubelet[2000]: E0209 09:51:25.504902 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:26.505514 kubelet[2000]: E0209 09:51:26.505450 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:27.506034 kubelet[2000]: E0209 09:51:27.505924 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:28.506607 kubelet[2000]: E0209 09:51:28.506496 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:29.507727 kubelet[2000]: E0209 09:51:29.507660 2000 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:30.508315 kubelet[2000]: E0209 09:51:30.508194 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:30.604469 systemd[1]: Started sshd@14-139.178.94.23:22-202.120.37.249:55488.service. Feb 9 09:51:30.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.23:22-202.120.37.249:55488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:30.697684 kernel: audit: type=1130 audit(1707472290.603:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.23:22-202.120.37.249:55488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:51:31.508732 kubelet[2000]: E0209 09:51:31.508620 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:32.509109 kubelet[2000]: E0209 09:51:32.509035 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:33.509248 kubelet[2000]: E0209 09:51:33.509176 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:34.510118 kubelet[2000]: E0209 09:51:34.510054 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:35.510553 kubelet[2000]: E0209 09:51:35.510535 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:36.510974 kubelet[2000]: E0209 09:51:36.510918 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:37.511093 kubelet[2000]: E0209 09:51:37.511031 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:38.511923 kubelet[2000]: E0209 09:51:38.511805 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:39.512727 kubelet[2000]: E0209 09:51:39.512669 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:40.513854 kubelet[2000]: E0209 09:51:40.513738 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:41.515029 kubelet[2000]: E0209 09:51:41.514909 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:42.052960 kubelet[2000]: I0209 09:51:42.052864 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:51:42.216654 kubelet[2000]: I0209 09:51:42.216542 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6m8\" (UniqueName: \"kubernetes.io/projected/6af4c0ce-04b7-466c-84fa-6e798ffe7c50-kube-api-access-bz6m8\") pod \"nginx-deployment-8ffc5cf85-bnq2q\" (UID: \"6af4c0ce-04b7-466c-84fa-6e798ffe7c50\") " pod="default/nginx-deployment-8ffc5cf85-bnq2q" Feb 9 09:51:42.359074 env[1550]: time="2024-02-09T09:51:42.359002447Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bnq2q,Uid:6af4c0ce-04b7-466c-84fa-6e798ffe7c50,Namespace:default,Attempt:0,}" Feb 9 09:51:42.515907 kubelet[2000]: E0209 09:51:42.515849 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:42.542603 systemd-networkd[1407]: cali34836848821: Link UP Feb 9 09:51:42.600683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:51:42.600765 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali34836848821: link becomes ready Feb 9 09:51:42.600795 systemd-networkd[1407]: cali34836848821: Gained carrier Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.372 [INFO][9687] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.395 [INFO][9687] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0 nginx-deployment-8ffc5cf85- default 6af4c0ce-04b7-466c-84fa-6e798ffe7c50 1913 0 2024-02-09 09:51:42 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.80.11 nginx-deployment-8ffc5cf85-bnq2q eth0 default [] [] [kns.default ksa.default.default] cali34836848821 [] []}} ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.395 [INFO][9687] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.460 [INFO][9709] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" HandleID="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Workload="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.483 [INFO][9709] ipam_plugin.go 268: Auto assigning IP ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" HandleID="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Workload="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320160), Attrs:map[string]string{"namespace":"default", "node":"10.67.80.11", "pod":"nginx-deployment-8ffc5cf85-bnq2q", "timestamp":"2024-02-09 09:51:42.460889334 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.483 [INFO][9709] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.483 [INFO][9709] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.483 [INFO][9709] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.487 [INFO][9709] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.495 [INFO][9709] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.503 [INFO][9709] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.508 [INFO][9709] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.513 [INFO][9709] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.513 [INFO][9709] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.517 [INFO][9709] ipam.go 1682: Creating new handle: k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769 Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.524 [INFO][9709] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.535 [INFO][9709] ipam.go 1216: Successfully claimed IPs: [192.168.5.69/26] block=192.168.5.64/26 handle="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.535 [INFO][9709] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.69/26] handle="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" host="10.67.80.11" Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.536 [INFO][9709] ipam_plugin.go 377: Released host-wide IPAM lock. 
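The ipam.go entries above show Calico confirming this host's affinity for block 192.168.5.64/26 and then claiming 192.168.5.69 for the nginx pod. A minimal Python sketch (standard-library ipaddress only, not part of Calico) that checks the claimed address really does fall inside that /26 affinity block:

    import ipaddress

    # CIDR and address copied from the ipam.go entries above.
    block = ipaddress.ip_network("192.168.5.64/26")
    claimed = ipaddress.ip_address("192.168.5.69")

    print(block.num_addresses)   # 64 addresses in a /26 block
    print(claimed in block)      # True: the claim is consistent with the affinity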
Feb 9 09:51:42.610696 env[1550]: 2024-02-09 09:51:42.536 [INFO][9709] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.69/26] IPv6=[] ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" HandleID="k8s-pod-network.79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Workload="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.539 [INFO][9687] k8s.go 385: Populated endpoint ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"6af4c0ce-04b7-466c-84fa-6e798ffe7c50", ResourceVersion:"1913", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-bnq2q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali34836848821", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.539 [INFO][9687] k8s.go 386: Calico CNI using IPs: [192.168.5.69/32] ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.539 [INFO][9687] dataplane_linux.go 68: Setting the host side veth name to cali34836848821 ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.600 [INFO][9687] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.600 [INFO][9687] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"6af4c0ce-04b7-466c-84fa-6e798ffe7c50", ResourceVersion:"1913", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769", Pod:"nginx-deployment-8ffc5cf85-bnq2q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali34836848821", MAC:"92:fb:c2:6d:28:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:51:42.611337 env[1550]: 2024-02-09 09:51:42.609 [INFO][9687] k8s.go 491: Wrote updated endpoint to datastore ContainerID="79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769" Namespace="default" Pod="nginx-deployment-8ffc5cf85-bnq2q" WorkloadEndpoint="10.67.80.11-k8s-nginx--deployment--8ffc5cf85--bnq2q-eth0" Feb 9 09:51:42.621775 env[1550]: time="2024-02-09T09:51:42.621701636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:51:42.621775 env[1550]: time="2024-02-09T09:51:42.621722853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:51:42.621775 env[1550]: time="2024-02-09T09:51:42.621729722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:51:42.621953 env[1550]: time="2024-02-09T09:51:42.621828507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769 pid=9749 runtime=io.containerd.runc.v2 Feb 9 09:51:42.674844 env[1550]: time="2024-02-09T09:51:42.674819405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-bnq2q,Uid:6af4c0ce-04b7-466c-84fa-6e798ffe7c50,Namespace:default,Attempt:0,} returns sandbox id \"79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769\"" Feb 9 09:51:42.675492 env[1550]: time="2024-02-09T09:51:42.675474905Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:51:43.516904 kubelet[2000]: E0209 09:51:43.516783 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:43.735993 systemd-networkd[1407]: cali34836848821: Gained IPv6LL Feb 9 09:51:44.379366 kubelet[2000]: E0209 09:51:44.379334 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:44.517233 kubelet[2000]: E0209 09:51:44.517119 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:45.518434 kubelet[2000]: E0209 09:51:45.518321 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:46.518613 kubelet[2000]: E0209 09:51:46.518553 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:46.538417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624-rootfs.mount: Deactivated successfully. 
Feb 9 09:51:46.539252 env[1550]: time="2024-02-09T09:51:46.539215954Z" level=info msg="shim disconnected" id=dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624 Feb 9 09:51:46.539528 env[1550]: time="2024-02-09T09:51:46.539255079Z" level=warning msg="cleaning up after shim disconnected" id=dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624 namespace=k8s.io Feb 9 09:51:46.539528 env[1550]: time="2024-02-09T09:51:46.539264717Z" level=info msg="cleaning up dead shim" Feb 9 09:51:46.557182 env[1550]: time="2024-02-09T09:51:46.557109611Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9941 runtime=io.containerd.runc.v2\n" Feb 9 09:51:47.519744 kubelet[2000]: E0209 09:51:47.519661 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:47.541205 kubelet[2000]: I0209 09:51:47.541180 2000 scope.go:115] "RemoveContainer" containerID="f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf" Feb 9 09:51:47.541475 kubelet[2000]: I0209 09:51:47.541460 2000 scope.go:115] "RemoveContainer" containerID="dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624" Feb 9 09:51:47.541783 kubelet[2000]: E0209 09:51:47.541768 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-cddd66c57-2zj4d_calico-system(09b20649-bbc0-45d1-af93-aab9a21df100)\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:51:47.542215 env[1550]: time="2024-02-09T09:51:47.542188546Z" level=info msg="RemoveContainer for \"f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf\"" Feb 9 09:51:47.543853 env[1550]: time="2024-02-09T09:51:47.543808107Z" level=info msg="RemoveContainer for \"f141e3028ad707544e7498509c5f9adc22db4deaf447686d0b8e7599253578cf\" returns successfully" Feb 9 09:51:48.519938 kubelet[2000]: E0209 09:51:48.519820 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:49.520634 kubelet[2000]: E0209 09:51:49.520573 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:50.520911 kubelet[2000]: E0209 09:51:50.520855 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:51.169578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560168088.mount: Deactivated successfully. 
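The pod_workers.go entry above shows the kubelet putting calico-kube-controllers into CrashLoopBackOff with an initial 10s delay. Assuming the usual kubelet back-off behaviour of doubling the delay on each failed restart up to a five-minute cap (an assumption; the log only shows the 10s step), a small Python sketch of the resulting delay sequence:

    from itertools import islice

    def crashloop_backoff(initial=10, cap=300):
        # Assumed policy: start at 10s, double per failure, cap at 300s.
        delay = initial
        while True:
            yield delay
            delay = min(delay * 2, cap)

    print(list(islice(crashloop_backoff(), 6)))   # [10, 20, 40, 80, 160, 300]

The "RemoveContainer"/"StartContainer" entries at 09:52:02 further down are consistent with the container being retried once that first back-off expires.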
Feb 9 09:51:51.521579 kubelet[2000]: E0209 09:51:51.521489 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:51.714364 env[1550]: time="2024-02-09T09:51:51.714306954Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:51:51.714903 env[1550]: time="2024-02-09T09:51:51.714863767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:51:51.715849 env[1550]: time="2024-02-09T09:51:51.715809488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:51:51.716950 env[1550]: time="2024-02-09T09:51:51.716909190Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:51:51.717200 env[1550]: time="2024-02-09T09:51:51.717154718Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 09:51:51.718195 env[1550]: time="2024-02-09T09:51:51.718140220Z" level=info msg="CreateContainer within sandbox \"79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 09:51:51.721739 env[1550]: time="2024-02-09T09:51:51.721693088Z" level=info msg="CreateContainer within sandbox \"79680b6b3b98f10fcdec4513f293b3a8b4210fb7a86dbbbab83d6e0585b03769\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f12d28dd41f72ca12ad72c5b32ea545f089ab1e19f1cbc66859181d207030102\"" Feb 9 09:51:51.722022 env[1550]: time="2024-02-09T09:51:51.721987200Z" level=info msg="StartContainer for \"f12d28dd41f72ca12ad72c5b32ea545f089ab1e19f1cbc66859181d207030102\"" Feb 9 09:51:51.777537 env[1550]: time="2024-02-09T09:51:51.777476557Z" level=info msg="StartContainer for \"f12d28dd41f72ca12ad72c5b32ea545f089ab1e19f1cbc66859181d207030102\" returns successfully" Feb 9 09:51:52.522211 kubelet[2000]: E0209 09:51:52.522112 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:52.566010 kubelet[2000]: I0209 09:51:52.565907 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-bnq2q" podStartSLOduration=-9.223372026288942e+09 pod.CreationTimestamp="2024-02-09 09:51:42 +0000 UTC" firstStartedPulling="2024-02-09 09:51:42.675350658 +0000 UTC m=+198.534586944" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:51:52.565028842 +0000 UTC m=+208.424265221" watchObservedRunningTime="2024-02-09 09:51:52.565834311 +0000 UTC m=+208.425070642" Feb 9 09:51:53.522904 kubelet[2000]: E0209 09:51:53.522828 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:54.523402 kubelet[2000]: E0209 09:51:54.523290 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:54.988010 kubelet[2000]: I0209 
09:51:54.987930 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:51:54.996000 audit[10346]: NETFILTER_CFG table=filter:83 family=2 entries=26 op=nft_register_rule pid=10346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:54.996000 audit[10346]: SYSCALL arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffd84679850 a2=0 a3=7ffd8467983c items=0 ppid=2277 pid=10346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.099204 kubelet[2000]: I0209 09:51:55.099193 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b24363a9-2571-48f5-a1f6-b8ef1aea2222-data\") pod \"nfs-server-provisioner-0\" (UID: \"b24363a9-2571-48f5-a1f6-b8ef1aea2222\") " pod="default/nfs-server-provisioner-0" Feb 9 09:51:55.099256 kubelet[2000]: I0209 09:51:55.099215 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqg7n\" (UniqueName: \"kubernetes.io/projected/b24363a9-2571-48f5-a1f6-b8ef1aea2222-kube-api-access-hqg7n\") pod \"nfs-server-provisioner-0\" (UID: \"b24363a9-2571-48f5-a1f6-b8ef1aea2222\") " pod="default/nfs-server-provisioner-0" Feb 9 09:51:55.155195 kernel: audit: type=1325 audit(1707472314.996:287): table=filter:83 family=2 entries=26 op=nft_register_rule pid=10346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.155235 kernel: audit: type=1300 audit(1707472314.996:287): arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffd84679850 a2=0 a3=7ffd8467983c items=0 ppid=2277 pid=10346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.155250 kernel: audit: type=1327 audit(1707472314.996:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:54.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.216000 audit[10346]: NETFILTER_CFG table=nat:84 family=2 entries=20 op=nft_register_rule pid=10346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.216000 audit[10346]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd84679850 a2=0 a3=31030 items=0 ppid=2277 pid=10346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.294247 env[1550]: time="2024-02-09T09:51:55.294187634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b24363a9-2571-48f5-a1f6-b8ef1aea2222,Namespace:default,Attempt:0,}" Feb 9 09:51:55.372563 kernel: audit: type=1325 audit(1707472315.216:288): table=nat:84 family=2 entries=20 op=nft_register_rule pid=10346 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.372633 kernel: audit: type=1300 audit(1707472315.216:288): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd84679850 a2=0 a3=31030 items=0 ppid=2277 pid=10346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.372650 kernel: audit: type=1327 audit(1707472315.216:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.392437 systemd-networkd[1407]: cali60e51b789ff: Link UP Feb 9 09:51:55.430585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:51:55.486116 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 09:51:55.486200 systemd-networkd[1407]: cali60e51b789ff: Gained carrier Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.308 [INFO][10364] utils.go 100: File /var/lib/calico/mtu does not exist Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.319 [INFO][10364] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b24363a9-2571-48f5-a1f6-b8ef1aea2222 1977 0 2024-02-09 09:51:54 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.67.80.11 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.319 [INFO][10364] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.333 [INFO][10382] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" HandleID="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Workload="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.351 [INFO][10382] ipam_plugin.go 268: Auto assigning IP ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" HandleID="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Workload="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000543280), Attrs:map[string]string{"namespace":"default", "node":"10.67.80.11", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 09:51:55.333520802 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.351 [INFO][10382] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.351 [INFO][10382] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.351 [INFO][10382] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.355 [INFO][10382] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.362 [INFO][10382] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.369 [INFO][10382] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.373 [INFO][10382] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.377 [INFO][10382] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.377 [INFO][10382] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.380 [INFO][10382] ipam.go 1682: Creating new handle: k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.384 [INFO][10382] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.390 [INFO][10382] ipam.go 1216: Successfully claimed IPs: [192.168.5.70/26] block=192.168.5.64/26 handle="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.390 [INFO][10382] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.70/26] handle="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" host="10.67.80.11" Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.390 [INFO][10382] ipam_plugin.go 377: Released host-wide IPAM lock. 
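The nfs-server-provisioner endpoint above declares the usual NFS service ports (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662, each over TCP and UDP); the endpoint dump that follows prints the same ports in hex (Port:0x801 and so on). A throwaway Python sketch, not taken from Calico, that confirms the decimal/hex correspondence:

    # Port names and numbers copied from the workload endpoint entry above.
    ports = {"nfs": 2049, "nlockmgr": 32803, "mountd": 20048,
             "rquotad": 875, "rpcbind": 111, "statd": 662}
    for name, number in ports.items():
        print(f"{name}: {number} = {hex(number)}")   # e.g. nfs: 2049 = 0x801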
Feb 9 09:51:55.501417 env[1550]: 2024-02-09 09:51:55.390 [INFO][10382] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.70/26] IPv6=[] ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" HandleID="k8s-pod-network.7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Workload="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.501927 env[1550]: 2024-02-09 09:51:55.391 [INFO][10364] k8s.go 385: Populated endpoint ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b24363a9-2571-48f5-a1f6-b8ef1aea2222", ResourceVersion:"1977", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:51:55.501927 env[1550]: 2024-02-09 09:51:55.391 [INFO][10364] k8s.go 386: Calico CNI using IPs: [192.168.5.70/32] ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.501927 env[1550]: 2024-02-09 09:51:55.391 [INFO][10364] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.501927 env[1550]: 2024-02-09 09:51:55.486 [INFO][10364] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.502042 env[1550]: 2024-02-09 09:51:55.488 [INFO][10364] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b24363a9-2571-48f5-a1f6-b8ef1aea2222", ResourceVersion:"1977", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.5.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ca:c3:73:f4:10:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:51:55.502042 env[1550]: 2024-02-09 09:51:55.500 [INFO][10364] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.80.11-k8s-nfs--server--provisioner--0-eth0" Feb 9 09:51:55.505000 audit[10438]: NETFILTER_CFG table=filter:85 family=2 entries=38 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.506826 env[1550]: time="2024-02-09T09:51:55.506796732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:51:55.506826 env[1550]: time="2024-02-09T09:51:55.506818774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:51:55.506903 env[1550]: time="2024-02-09T09:51:55.506829440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:51:55.506992 env[1550]: time="2024-02-09T09:51:55.506971572Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc pid=10442 runtime=io.containerd.runc.v2 Feb 9 09:51:55.523986 kubelet[2000]: E0209 09:51:55.523945 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:55.505000 audit[10438]: SYSCALL arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffef7c94740 a2=0 a3=7ffef7c9472c items=0 ppid=2277 pid=10438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.659947 kernel: audit: type=1325 audit(1707472315.505:289): table=filter:85 family=2 entries=38 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.660021 kernel: audit: type=1300 audit(1707472315.505:289): arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffef7c94740 a2=0 a3=7ffef7c9472c items=0 ppid=2277 pid=10438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.660039 kernel: audit: type=1327 audit(1707472315.505:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.726000 audit[10438]: NETFILTER_CFG table=nat:86 family=2 entries=20 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.726000 audit[10438]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffef7c94740 a2=0 a3=31030 items=0 ppid=2277 pid=10438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:51:55.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:51:55.785520 kernel: audit: type=1325 audit(1707472315.726:290): table=nat:86 family=2 entries=20 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:51:55.803056 env[1550]: time="2024-02-09T09:51:55.803003295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b24363a9-2571-48f5-a1f6-b8ef1aea2222,Namespace:default,Attempt:0,} returns sandbox id \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\"" Feb 9 09:51:55.803696 env[1550]: time="2024-02-09T09:51:55.803653223Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 09:51:56.524922 kubelet[2000]: E0209 09:51:56.524857 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:57.047762 systemd-networkd[1407]: cali60e51b789ff: Gained IPv6LL Feb 9 09:51:57.526072 kubelet[2000]: E0209 09:51:57.525987 2000 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:58.526959 kubelet[2000]: E0209 09:51:58.526869 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:59.527253 kubelet[2000]: E0209 09:51:59.527231 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:51:59.654837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421692179.mount: Deactivated successfully. Feb 9 09:52:00.527984 kubelet[2000]: E0209 09:52:00.527932 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:00.827812 env[1550]: time="2024-02-09T09:52:00.827741493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:52:00.828422 env[1550]: time="2024-02-09T09:52:00.828395084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:52:00.829611 env[1550]: time="2024-02-09T09:52:00.829551770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:52:00.830621 env[1550]: time="2024-02-09T09:52:00.830579583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:52:00.831378 env[1550]: time="2024-02-09T09:52:00.831335610Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 09:52:00.832664 env[1550]: time="2024-02-09T09:52:00.832637308Z" level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 09:52:00.837154 env[1550]: time="2024-02-09T09:52:00.837137078Z" level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9\"" Feb 9 09:52:00.837384 env[1550]: time="2024-02-09T09:52:00.837368777Z" level=info msg="StartContainer for \"befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9\"" Feb 9 09:52:00.906549 env[1550]: time="2024-02-09T09:52:00.906504427Z" level=info msg="StartContainer for \"befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9\" returns successfully" Feb 9 09:52:01.528624 kubelet[2000]: E0209 09:52:01.528564 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:01.601955 kubelet[2000]: I0209 09:52:01.601853 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372029252998e+09 pod.CreationTimestamp="2024-02-09 09:51:54 +0000 UTC" firstStartedPulling="2024-02-09 
09:51:55.803537737 +0000 UTC m=+211.662774020" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:52:01.601123995 +0000 UTC m=+217.460360340" watchObservedRunningTime="2024-02-09 09:52:01.601777318 +0000 UTC m=+217.461013663" Feb 9 09:52:01.691000 audit[10818]: NETFILTER_CFG table=filter:87 family=2 entries=26 op=nft_register_rule pid=10818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:01.725897 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 09:52:01.725955 kernel: audit: type=1325 audit(1707472321.691:291): table=filter:87 family=2 entries=26 op=nft_register_rule pid=10818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:01.691000 audit[10818]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffdd525cb70 a2=0 a3=7ffdd525cb5c items=0 ppid=2277 pid=10818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:01.880243 kernel: audit: type=1300 audit(1707472321.691:291): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffdd525cb70 a2=0 a3=7ffdd525cb5c items=0 ppid=2277 pid=10818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:01.880297 kernel: audit: type=1327 audit(1707472321.691:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:01.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:01.943000 audit[10818]: NETFILTER_CFG table=nat:88 family=2 entries=104 op=nft_register_chain pid=10818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:01.943000 audit[10818]: SYSCALL arch=c000003e syscall=46 success=yes exit=47292 a0=3 a1=7ffdd525cb70 a2=0 a3=7ffdd525cb5c items=0 ppid=2277 pid=10818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:02.098039 kernel: audit: type=1325 audit(1707472321.943:292): table=nat:88 family=2 entries=104 op=nft_register_chain pid=10818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:02.098102 kernel: audit: type=1300 audit(1707472321.943:292): arch=c000003e syscall=46 success=yes exit=47292 a0=3 a1=7ffdd525cb70 a2=0 a3=7ffdd525cb5c items=0 ppid=2277 pid=10818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:02.098131 kernel: audit: type=1327 audit(1707472321.943:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:01.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:02.528704 kubelet[2000]: E0209 09:52:02.528657 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:02.952175 kubelet[2000]: I0209 09:52:02.951951 2000 scope.go:115] "RemoveContainer" 
containerID="dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624" Feb 9 09:52:02.957201 env[1550]: time="2024-02-09T09:52:02.957069860Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:2,}" Feb 9 09:52:02.963630 env[1550]: time="2024-02-09T09:52:02.963585747Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:2,} returns container id \"8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b\"" Feb 9 09:52:02.963797 env[1550]: time="2024-02-09T09:52:02.963778845Z" level=info msg="StartContainer for \"8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b\"" Feb 9 09:52:03.034810 env[1550]: time="2024-02-09T09:52:03.034742150Z" level=info msg="StartContainer for \"8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b\" returns successfully" Feb 9 09:52:03.529019 kubelet[2000]: E0209 09:52:03.528916 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:04.379900 kubelet[2000]: E0209 09:52:04.379787 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:04.530235 kubelet[2000]: E0209 09:52:04.530121 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:05.531441 kubelet[2000]: E0209 09:52:05.531367 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:06.531701 kubelet[2000]: E0209 09:52:06.531595 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:07.532888 kubelet[2000]: E0209 09:52:07.532806 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:08.533938 kubelet[2000]: E0209 09:52:08.533860 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:09.534646 kubelet[2000]: E0209 09:52:09.534537 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:10.535639 kubelet[2000]: E0209 09:52:10.535527 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:11.536050 kubelet[2000]: E0209 09:52:11.535939 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:12.537115 kubelet[2000]: E0209 09:52:12.537094 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:13.538082 kubelet[2000]: E0209 09:52:13.537956 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:14.538304 kubelet[2000]: E0209 09:52:14.538219 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:15.538756 kubelet[2000]: E0209 09:52:15.538645 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 09:52:16.539708 kubelet[2000]: E0209 09:52:16.539587 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:17.220008 systemd[1]: Started sshd@15-139.178.94.23:22-141.98.11.11:24830.service. Feb 9 09:52:17.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.23:22-141.98.11.11:24830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:17.309551 kernel: audit: type=1130 audit(1707472337.219:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.23:22-141.98.11.11:24830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:17.540050 kubelet[2000]: E0209 09:52:17.539941 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:18.190495 sshd[11519]: Invalid user sshadmin from 141.98.11.11 port 24830 Feb 9 09:52:18.390050 sshd[11519]: pam_faillock(sshd:auth): User unknown Feb 9 09:52:18.391020 sshd[11519]: pam_unix(sshd:auth): check pass; user unknown Feb 9 09:52:18.391108 sshd[11519]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11 Feb 9 09:52:18.392130 sshd[11519]: pam_faillock(sshd:auth): User unknown Feb 9 09:52:18.391000 audit[11519]: USER_AUTH pid=11519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="sshadmin" exe="/usr/sbin/sshd" hostname=141.98.11.11 addr=141.98.11.11 terminal=ssh res=failed' Feb 9 09:52:18.481697 kernel: audit: type=1100 audit(1707472338.391:294): pid=11519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="sshadmin" exe="/usr/sbin/sshd" hostname=141.98.11.11 addr=141.98.11.11 terminal=ssh res=failed' Feb 9 09:52:18.540301 kubelet[2000]: E0209 09:52:18.540184 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:19.541317 kubelet[2000]: E0209 09:52:19.541240 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:20.248678 sshd[11519]: Failed password for invalid user sshadmin from 141.98.11.11 port 24830 ssh2 Feb 9 09:52:20.541828 kubelet[2000]: E0209 09:52:20.541752 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:21.404463 sshd[11519]: Connection closed by invalid user sshadmin 141.98.11.11 port 24830 [preauth] Feb 9 09:52:21.406938 systemd[1]: sshd@15-139.178.94.23:22-141.98.11.11:24830.service: Deactivated successfully. Feb 9 09:52:21.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.23:22-141.98.11.11:24830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:21.498692 kernel: audit: type=1131 audit(1707472341.407:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.94.23:22-141.98.11.11:24830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:52:21.542062 kubelet[2000]: E0209 09:52:21.541950 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:22.542979 kubelet[2000]: E0209 09:52:22.542856 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:23.543840 kubelet[2000]: E0209 09:52:23.543730 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:24.379509 kubelet[2000]: E0209 09:52:24.379404 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:24.543986 kubelet[2000]: E0209 09:52:24.543870 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:25.545252 kubelet[2000]: E0209 09:52:25.545138 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:26.546041 kubelet[2000]: E0209 09:52:26.545965 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:27.546770 kubelet[2000]: E0209 09:52:27.546664 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:28.546936 kubelet[2000]: E0209 09:52:28.546857 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:29.548041 kubelet[2000]: E0209 09:52:29.547926 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:30.549055 kubelet[2000]: E0209 09:52:30.548936 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:31.550283 kubelet[2000]: E0209 09:52:31.550180 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:32.551318 kubelet[2000]: E0209 09:52:32.551203 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:33.552103 kubelet[2000]: E0209 09:52:33.551989 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:34.553229 kubelet[2000]: E0209 09:52:34.553112 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:35.554506 kubelet[2000]: E0209 09:52:35.554369 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:35.983407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9-rootfs.mount: Deactivated successfully. 
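Most of the kubelet noise in this stretch of the journal is one message repeated on every poll: file_linux.go:61 "Unable to read config path" for /etc/kubernetes/manifests. Kubelet logs this when the static-pod manifest directory it was configured with (staticPodPath / --pod-manifest-path) does not exist; creating the directory, or pointing the setting at one that exists, makes it stop. When triaging a dump this size it helps to collapse such repeats into counts first. A minimal sketch, assuming the journal has been exported one entry per line; the script name and regexes are mine:

    # top_messages.py -- collapse repeated journal messages into counts for triage
    import re
    import sys
    from collections import Counter

    TIMESTAMP = re.compile(r"^\w{3}\s+\d+\s+[\d:.]+\s+")  # strips the "Feb 9 09:52:02.528704 " prefix
    DIGITS = re.compile(r"\d+")                           # folds pids/timestamps so repeats group together

    def normalize(line: str) -> str:
        return DIGITS.sub("N", TIMESTAMP.sub("", line.strip()))

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
            counts = Counter(normalize(line) for line in fh if line.strip())
        for message, count in counts.most_common(10):
            print(f"{count:6d}  {message[:120]}")

Run against this boot's journal it would put the kubelet "Unable to read config path" entry at the top by a wide margin.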
Feb 9 09:52:36.000458 env[1550]: time="2024-02-09T09:52:36.000404882Z" level=info msg="shim disconnected" id=befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9 Feb 9 09:52:36.000907 env[1550]: time="2024-02-09T09:52:36.000464519Z" level=warning msg="cleaning up after shim disconnected" id=befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9 namespace=k8s.io Feb 9 09:52:36.000907 env[1550]: time="2024-02-09T09:52:36.000487943Z" level=info msg="cleaning up dead shim" Feb 9 09:52:36.022968 env[1550]: time="2024-02-09T09:52:36.022878854Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:52:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=12277 runtime=io.containerd.runc.v2\n" Feb 9 09:52:36.555226 kubelet[2000]: E0209 09:52:36.555164 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:36.689761 kubelet[2000]: I0209 09:52:36.689677 2000 scope.go:115] "RemoveContainer" containerID="befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9" Feb 9 09:52:36.694977 env[1550]: time="2024-02-09T09:52:36.694854662Z" level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,}" Feb 9 09:52:36.709519 env[1550]: time="2024-02-09T09:52:36.709486663Z" level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,} returns container id \"144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2\"" Feb 9 09:52:36.709898 env[1550]: time="2024-02-09T09:52:36.709882676Z" level=info msg="StartContainer for \"144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2\"" Feb 9 09:52:36.711689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646231460.mount: Deactivated successfully. 
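The audit records throughout this section (NETFILTER_CFG, SYSCALL, PROCTITLE) carry the invoking command line in the proctitle field as a single hex string with NUL-separated arguments, which is why the same opaque blob keeps reappearing after each iptables-restore run. A small sketch to decode it; the helper name is mine, and the sample value is the proctitle that accompanies these records:

    # decode_proctitle.py -- turn an audit PROCTITLE hex string back into argv
    def decode_proctitle(hex_value: str) -> list:
        raw = bytes.fromhex(hex_value)              # audit hex-encodes the raw command line
        return [arg.decode("utf-8", errors="replace")
                for arg in raw.split(b"\x00") if arg]   # arguments are NUL-separated

    if __name__ == "__main__":
        sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
                  "002D2D6E6F666C757368002D2D636F756E74657273")
        print(decode_proctitle(sample))
        # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The bpftool proctitle values further down decode the same way (to bpftool prog load /usr/lib/calico/bpf/filter.o ... type xdp), i.e. Calico installing its XDP prefilter program.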
Feb 9 09:52:36.726000 audit[12355]: NETFILTER_CFG table=filter:89 family=2 entries=26 op=nft_register_rule pid=12355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:36.726000 audit[12355]: SYSCALL arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffd45244070 a2=0 a3=7ffd4524405c items=0 ppid=2277 pid=12355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:36.882768 kernel: audit: type=1325 audit(1707472356.726:296): table=filter:89 family=2 entries=26 op=nft_register_rule pid=12355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:36.882855 kernel: audit: type=1300 audit(1707472356.726:296): arch=c000003e syscall=46 success=yes exit=13180 a0=3 a1=7ffd45244070 a2=0 a3=7ffd4524405c items=0 ppid=2277 pid=12355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:36.882873 kernel: audit: type=1327 audit(1707472356.726:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:36.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:36.940459 kernel: audit: type=1325 audit(1707472356.726:297): table=nat:90 family=2 entries=104 op=nft_unregister_chain pid=12355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:36.726000 audit[12355]: NETFILTER_CFG table=nat:90 family=2 entries=104 op=nft_unregister_chain pid=12355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:36.999099 kernel: audit: type=1300 audit(1707472356.726:297): arch=c000003e syscall=46 success=yes exit=8412 a0=3 a1=7ffd45244070 a2=0 a3=7ffd4524405c items=0 ppid=2277 pid=12355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:36.726000 audit[12355]: SYSCALL arch=c000003e syscall=46 success=yes exit=8412 a0=3 a1=7ffd45244070 a2=0 a3=7ffd4524405c items=0 ppid=2277 pid=12355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:37.096084 kernel: audit: type=1327 audit(1707472356.726:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:36.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:37.168417 env[1550]: time="2024-02-09T09:52:37.168394022Z" level=info msg="StartContainer for \"144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2\" returns successfully" Feb 9 09:52:37.556241 kubelet[2000]: E0209 09:52:37.556177 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:37.798000 audit[12472]: NETFILTER_CFG table=filter:91 family=2 entries=26 op=nft_register_rule pid=12472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:37.798000 audit[12472]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff05965630 a2=0 a3=7fff0596561c items=0 ppid=2277 pid=12472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:37.965039 kernel: audit: type=1325 audit(1707472357.798:298): table=filter:91 family=2 entries=26 op=nft_register_rule pid=12472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:37.965112 kernel: audit: type=1300 audit(1707472357.798:298): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fff05965630 a2=0 a3=7fff0596561c items=0 ppid=2277 pid=12472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:37.965129 kernel: audit: type=1327 audit(1707472357.798:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:37.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:37.808000 audit[12472]: NETFILTER_CFG table=nat:92 family=2 entries=104 op=nft_register_chain pid=12472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:37.808000 audit[12472]: SYSCALL arch=c000003e syscall=46 success=yes exit=47292 a0=3 a1=7fff05965630 a2=0 a3=7fff0596561c items=0 ppid=2277 pid=12472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:37.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:38.094685 kernel: audit: type=1325 audit(1707472357.808:299): table=nat:92 family=2 entries=104 op=nft_register_chain pid=12472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:38.556448 kubelet[2000]: E0209 09:52:38.556317 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:38.909233 systemd[1]: Started sshd@16-139.178.94.23:22-218.92.0.112:26159.service. Feb 9 09:52:38.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.23:22-218.92.0.112:26159 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:52:39.557142 kubelet[2000]: E0209 09:52:39.557017 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:40.558194 kubelet[2000]: E0209 09:52:40.558124 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:41.559341 kubelet[2000]: E0209 09:52:41.559264 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:42.559799 kubelet[2000]: E0209 09:52:42.559686 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:43.559968 kubelet[2000]: E0209 09:52:43.559887 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:44.378695 kubelet[2000]: E0209 09:52:44.378587 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:44.560550 kubelet[2000]: E0209 09:52:44.560435 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:45.467632 sshd[12513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:52:45.466000 audit[12513]: USER_AUTH pid=12513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:45.495714 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 9 09:52:45.495779 kernel: audit: type=1100 audit(1707472365.466:301): pid=12513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:45.561187 kubelet[2000]: E0209 09:52:45.561172 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:45.526000 audit[12827]: NETFILTER_CFG table=filter:93 family=2 entries=13 op=nft_register_rule pid=12827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:45.646354 kernel: audit: type=1325 audit(1707472365.526:302): table=filter:93 family=2 entries=13 op=nft_register_rule pid=12827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:45.646388 kernel: audit: type=1300 audit(1707472365.526:302): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffefe86cfc0 a2=0 a3=7ffefe86cfac items=0 ppid=2277 pid=12827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:45.526000 audit[12827]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffefe86cfc0 a2=0 a3=7ffefe86cfac items=0 ppid=2277 pid=12827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:45.744770 kernel: audit: type=1327 audit(1707472365.526:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:45.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:45.527000 audit[12827]: NETFILTER_CFG table=nat:94 family=2 entries=147 op=nft_register_chain pid=12827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:45.527000 audit[12827]: SYSCALL arch=c000003e syscall=46 success=yes exit=50788 a0=3 a1=7ffefe86cfc0 a2=0 a3=7ffefe86cfac items=0 ppid=2277 pid=12827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:45.964122 kernel: audit: type=1325 audit(1707472365.527:303): table=nat:94 family=2 entries=147 op=nft_register_chain pid=12827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:45.964180 kernel: audit: type=1300 audit(1707472365.527:303): arch=c000003e syscall=46 success=yes exit=50788 a0=3 a1=7ffefe86cfc0 a2=0 a3=7ffefe86cfac items=0 ppid=2277 pid=12827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:45.964196 kernel: audit: type=1327 audit(1707472365.527:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:45.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.152207 kernel: audit: type=1400 audit(1707472366.022:304): avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.152247 kernel: audit: type=1400 audit(1707472366.022:304): avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.152262 kernel: audit: type=1400 audit(1707472366.022:304): avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.022000 audit: BPF prog-id=10 op=LOAD Feb 9 09:52:46.022000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc359b18d0 a2=70 a3=7fb786402000 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.022000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.151000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { 
perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.151000 audit: BPF prog-id=11 op=LOAD Feb 9 09:52:46.151000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc359b18d0 a2=70 a3=6e items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.151000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc359b1880 a2=70 a3=7ffc359b18d0 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit: BPF prog-id=12 op=LOAD Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc359b1860 a2=70 a3=7ffc359b18d0 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc359b1940 a2=70 a3=0 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc359b1930 a2=70 a3=0 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc359b1970 a2=70 a3=0 items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { perfmon } for pid=12900 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit[12900]: AVC avc: denied { bpf } for pid=12900 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.215000 audit: BPF prog-id=13 op=LOAD Feb 9 09:52:46.215000 audit[12900]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc359b1890 a2=70 a3=ffffffff items=0 ppid=12828 pid=12900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:52:46.215000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:52:46.217000 audit[12940]: AVC avc: denied { bpf } for pid=12940 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.217000 audit[12940]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd4343ab30 a2=70 a3=fff80800 items=0 ppid=12828 pid=12940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.217000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:52:46.217000 audit[12940]: AVC avc: denied { bpf } for pid=12940 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:52:46.217000 audit[12940]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd4343aa00 a2=70 a3=3 items=0 ppid=12828 pid=12940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.217000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:52:46.228000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:52:46.319000 audit[12996]: NETFILTER_CFG table=mangle:95 family=2 entries=19 op=nft_register_chain pid=12996 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:52:46.319000 audit[12996]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffc6e4fdc00 a2=0 a3=7ffc6e4fdbec items=0 ppid=12828 pid=12996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.319000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:52:46.324000 audit[12995]: NETFILTER_CFG table=raw:96 family=2 entries=19 op=nft_register_chain pid=12995 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:52:46.324000 audit[12995]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffe06195de0 a2=0 a3=56152e2cd000 items=0 ppid=12828 pid=12995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.324000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:52:46.332000 audit[12999]: NETFILTER_CFG table=nat:97 family=2 entries=16 op=nft_register_chain pid=12999 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:52:46.332000 audit[12999]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffee2bdb2a0 a2=0 a3=7ffee2bdb28c items=0 ppid=12828 pid=12999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.332000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:52:46.336000 audit[12998]: NETFILTER_CFG table=filter:98 family=2 entries=221 op=nft_register_chain pid=12998 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:52:46.336000 audit[12998]: SYSCALL arch=c000003e syscall=46 success=yes exit=123184 a0=3 a1=7ffe29b1bfb0 a2=0 a3=5618d4f7b000 items=0 ppid=12828 pid=12998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:46.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:52:46.562271 kubelet[2000]: E0209 09:52:46.562246 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:46.844088 systemd-networkd[1407]: vxlan.calico: Link UP Feb 9 09:52:46.844107 systemd-networkd[1407]: vxlan.calico: Gained carrier Feb 9 09:52:47.563117 kubelet[2000]: E0209 09:52:47.563013 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:47.695887 sshd[12513]: Failed password for root from 218.92.0.112 port 26159 ssh2 Feb 9 09:52:48.055802 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Feb 9 09:52:48.564197 kubelet[2000]: E0209 09:52:48.564084 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:48.777000 audit[12513]: USER_AUTH pid=12513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:49.564754 kubelet[2000]: E0209 09:52:49.564627 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:50.565399 kubelet[2000]: E0209 09:52:50.565285 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:50.751897 sshd[12513]: Failed password for root from 218.92.0.112 port 26159 ssh2 Feb 9 09:52:51.566174 kubelet[2000]: E0209 09:52:51.566098 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:51.765000 audit[13054]: NETFILTER_CFG table=filter:99 family=2 entries=9 op=nft_register_rule pid=13054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:51.794063 kernel: kauditd_printk_skb: 81 callbacks suppressed Feb 9 09:52:51.794103 kernel: audit: type=1325 audit(1707472371.765:323): table=filter:99 family=2 entries=9 op=nft_register_rule pid=13054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:51.765000 audit[13054]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffca4896e80 a2=0 a3=7ffca4896e6c items=0 ppid=2277 pid=13054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:51.952884 kernel: audit: type=1300 audit(1707472371.765:323): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffca4896e80 a2=0 a3=7ffca4896e6c items=0 ppid=2277 pid=13054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:51.952920 kernel: audit: type=1327 audit(1707472371.765:323): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:51.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:52.019000 audit[13054]: NETFILTER_CFG table=nat:100 family=2 entries=171 op=nft_register_chain pid=13054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:52.019000 audit[13054]: SYSCALL arch=c000003e syscall=46 success=yes exit=61276 a0=3 a1=7ffca4896e80 a2=0 a3=7ffca4896e6c items=0 ppid=2277 pid=13054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:52.179785 kernel: audit: type=1325 audit(1707472372.019:324): table=nat:100 family=2 entries=171 op=nft_register_chain pid=13054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:52:52.179810 kernel: audit: type=1300 audit(1707472372.019:324): arch=c000003e syscall=46 success=yes exit=61276 a0=3 a1=7ffca4896e80 a2=0 a3=7ffca4896e6c items=0 ppid=2277 pid=13054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:52:52.179823 kernel: audit: type=1327 audit(1707472372.019:324): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:52.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:52:52.239625 kernel: audit: type=1100 audit(1707472372.113:325): pid=12513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:52.113000 audit[12513]: USER_AUTH pid=12513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:52.566847 kubelet[2000]: E0209 09:52:52.566654 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:53.305248 sshd[12513]: Failed password for root from 218.92.0.112 port 26159 ssh2 Feb 9 09:52:53.567302 kubelet[2000]: E0209 09:52:53.567083 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:53.851965 sshd[12513]: Received disconnect from 218.92.0.112 port 26159:11: [preauth] Feb 9 09:52:53.851965 sshd[12513]: Disconnected from authenticating user root 218.92.0.112 port 26159 [preauth] Feb 9 09:52:53.852503 sshd[12513]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:52:53.854525 systemd[1]: sshd@16-139.178.94.23:22-218.92.0.112:26159.service: Deactivated successfully. Feb 9 09:52:53.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.23:22-218.92.0.112:26159 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:53.947697 kernel: audit: type=1131 audit(1707472373.853:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.94.23:22-218.92.0.112:26159 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:54.014442 systemd[1]: Started sshd@17-139.178.94.23:22-218.92.0.112:57002.service. Feb 9 09:52:54.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.23:22-218.92.0.112:57002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:54.105547 kernel: audit: type=1130 audit(1707472374.013:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.23:22-218.92.0.112:57002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:52:54.567906 kubelet[2000]: E0209 09:52:54.567776 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:55.080464 sshd[13059]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:52:55.079000 audit[13059]: USER_AUTH pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:55.171547 kernel: audit: type=1100 audit(1707472375.079:328): pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:55.568763 kubelet[2000]: E0209 09:52:55.568659 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:56.569553 kubelet[2000]: E0209 09:52:56.569444 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:56.681758 sshd[13059]: Failed password for root from 218.92.0.112 port 57002 ssh2 Feb 9 09:52:57.570606 kubelet[2000]: E0209 09:52:57.570498 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:58.390000 audit[13059]: USER_AUTH pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:58.484790 kernel: audit: type=1100 audit(1707472378.390:329): pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:52:58.571318 kubelet[2000]: E0209 09:52:58.571251 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:52:59.571679 kubelet[2000]: E0209 09:52:59.571547 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:00.405614 sshd[13059]: Failed password for root from 218.92.0.112 port 57002 ssh2 Feb 9 09:53:00.572775 kubelet[2000]: E0209 09:53:00.572650 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:01.462000 audit[13125]: NETFILTER_CFG table=filter:101 family=2 entries=6 op=nft_register_rule pid=13125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:01.462000 audit[13125]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe24676a50 a2=0 a3=7ffe24676a3c items=0 ppid=2277 pid=13125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:01.572857 kubelet[2000]: E0209 09:53:01.572818 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:01.624835 kernel: audit: type=1325 audit(1707472381.462:330): table=filter:101 family=2 entries=6 op=nft_register_rule pid=13125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:01.624898 kernel: audit: type=1300 audit(1707472381.462:330): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe24676a50 a2=0 a3=7ffe24676a3c items=0 ppid=2277 pid=13125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:01.624911 kernel: 
audit: type=1327 audit(1707472381.462:330): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:01.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:01.698000 audit[13059]: USER_AUTH pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:01.791486 kernel: audit: type=1100 audit(1707472381.698:331): pid=13059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:01.791000 audit[13125]: NETFILTER_CFG table=nat:102 family=2 entries=192 op=nft_register_chain pid=13125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:01.791000 audit[13125]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe24676a50 a2=0 a3=7ffe24676a3c items=0 ppid=2277 pid=13125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:01.881426 update_engine[1540]: I0209 09:53:01.881387 1540 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 09:53:01.881426 update_engine[1540]: I0209 09:53:01.881404 1540 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 09:53:01.884814 update_engine[1540]: I0209 09:53:01.884797 1540 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 09:53:01.884984 update_engine[1540]: I0209 09:53:01.884956 1540 omaha_request_params.cc:62] Current group set to lts Feb 9 09:53:01.885068 update_engine[1540]: I0209 09:53:01.885025 1540 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 09:53:01.885068 update_engine[1540]: I0209 09:53:01.885027 1540 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 09:53:01.885068 update_engine[1540]: I0209 09:53:01.885035 1540 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:53:01.885068 update_engine[1540]: I0209 09:53:01.885048 1540 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 09:53:01.885156 update_engine[1540]: I0209 09:53:01.885072 1540 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:53:01.885156 update_engine[1540]: I0209 09:53:01.885075 1540 omaha_request_action.cc:271] Request: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: Feb 9 09:53:01.885156 update_engine[1540]: I0209 09:53:01.885078 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:53:01.885330 locksmithd[1591]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 09:53:01.885695 update_engine[1540]: I0209 09:53:01.885659 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:53:01.885726 update_engine[1540]: E0209 09:53:01.885706 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:53:01.885751 update_engine[1540]: I0209 09:53:01.885735 1540 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 09:53:01.953806 kernel: audit: type=1325 audit(1707472381.791:332): table=nat:102 family=2 entries=192 op=nft_register_chain pid=13125 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:01.953845 kernel: audit: type=1300 audit(1707472381.791:332): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe24676a50 a2=0 a3=7ffe24676a3c items=0 ppid=2277 pid=13125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:01.953862 kernel: audit: type=1327 audit(1707472381.791:332): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:01.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:02.573083 kubelet[2000]: E0209 09:53:02.572995 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:03.101981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b-rootfs.mount: Deactivated successfully. 
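The entries that follow replay the crash/restart cycle seen earlier for calico-kube-controllers: the container's shim disconnects, containerd cleans up the dead shim, and kubelet removes the previous container ID and defers the next attempt with CrashLoopBackOff. Kubelet's restart back-off roughly starts at 10s, doubles per failure, and is capped at five minutes; a tiny sketch of that schedule, with the constants treated as assumptions since they are kubelet internals:

    # crashloop_backoff.py -- approximate kubelet's CrashLoopBackOff delay schedule
    # assumed constants: 10s initial delay, doubling per failed restart, 5-minute cap
    def backoff_delays(initial=10.0, cap=300.0, restarts=8):
        delay = initial
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    if __name__ == "__main__":
        print([int(d) for d in backoff_delays()])
        # [10, 20, 40, 80, 160, 300, 300, 300]

The "back-off 20s" in the pod_workers error below corresponds to the second failed restart in that sequence.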
Feb 9 09:53:03.104171 env[1550]: time="2024-02-09T09:53:03.104138939Z" level=info msg="shim disconnected" id=8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b Feb 9 09:53:03.104412 env[1550]: time="2024-02-09T09:53:03.104172688Z" level=warning msg="cleaning up after shim disconnected" id=8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b namespace=k8s.io Feb 9 09:53:03.104412 env[1550]: time="2024-02-09T09:53:03.104180395Z" level=info msg="cleaning up dead shim" Feb 9 09:53:03.121815 env[1550]: time="2024-02-09T09:53:03.121785054Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:53:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=13140 runtime=io.containerd.runc.v2\n" Feb 9 09:53:03.573878 kubelet[2000]: E0209 09:53:03.573773 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:03.775789 kubelet[2000]: I0209 09:53:03.775729 2000 scope.go:115] "RemoveContainer" containerID="dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624" Feb 9 09:53:03.776422 kubelet[2000]: I0209 09:53:03.776375 2000 scope.go:115] "RemoveContainer" containerID="8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b" Feb 9 09:53:03.777257 kubelet[2000]: E0209 09:53:03.777212 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-cddd66c57-2zj4d_calico-system(09b20649-bbc0-45d1-af93-aab9a21df100)\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:53:03.778534 env[1550]: time="2024-02-09T09:53:03.778333234Z" level=info msg="RemoveContainer for \"dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624\"" Feb 9 09:53:03.783129 env[1550]: time="2024-02-09T09:53:03.783009076Z" level=info msg="RemoveContainer for \"dc95c6ce7cd5f2c75918638a302312fc32eef2e37e76ca100a54960b7b3e5624\" returns successfully" Feb 9 09:53:03.792644 sshd[13059]: Failed password for root from 218.92.0.112 port 57002 ssh2 Feb 9 09:53:04.378771 kubelet[2000]: E0209 09:53:04.378652 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:04.574038 kubelet[2000]: E0209 09:53:04.573922 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:05.008986 sshd[13059]: Received disconnect from 218.92.0.112 port 57002:11: [preauth] Feb 9 09:53:05.008986 sshd[13059]: Disconnected from authenticating user root 218.92.0.112 port 57002 [preauth] Feb 9 09:53:05.009234 sshd[13059]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:53:05.010113 systemd[1]: sshd@17-139.178.94.23:22-218.92.0.112:57002.service: Deactivated successfully. Feb 9 09:53:05.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.23:22-218.92.0.112:57002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:53:05.103562 kernel: audit: type=1131 audit(1707472385.008:333): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.94.23:22-218.92.0.112:57002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:53:05.154253 systemd[1]: Started sshd@18-139.178.94.23:22-218.92.0.112:21210.service. Feb 9 09:53:05.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.23:22-218.92.0.112:21210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:53:05.246687 kernel: audit: type=1130 audit(1707472385.152:334): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.23:22-218.92.0.112:21210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:53:05.574232 kubelet[2000]: E0209 09:53:05.574079 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:06.161704 sshd[13154]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:53:06.160000 audit[13154]: USER_AUTH pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:06.254679 kernel: audit: type=1100 audit(1707472386.160:335): pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:06.575285 kubelet[2000]: E0209 09:53:06.575175 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:07.576267 kubelet[2000]: E0209 09:53:07.576141 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:07.607754 sshd[13154]: Failed password for root from 218.92.0.112 port 21210 ssh2 Feb 9 09:53:07.888000 audit[13154]: USER_AUTH pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:07.981688 kernel: audit: type=1100 audit(1707472387.888:336): pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:08.577205 kubelet[2000]: E0209 09:53:08.577154 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:09.471137 sshd[13154]: Failed password for root from 218.92.0.112 port 21210 ssh2 Feb 9 09:53:09.577540 kubelet[2000]: E0209 09:53:09.577422 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:10.578099 kubelet[2000]: E0209 09:53:10.577980 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:11.190000 audit[13154]: USER_AUTH pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? 
acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:11.284542 kernel: audit: type=1100 audit(1707472391.190:337): pid=13154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=218.92.0.112 addr=218.92.0.112 terminal=ssh res=failed' Feb 9 09:53:11.579272 kubelet[2000]: E0209 09:53:11.579177 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:11.821898 update_engine[1540]: I0209 09:53:11.821781 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:53:11.822853 update_engine[1540]: I0209 09:53:11.822252 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:53:11.822853 update_engine[1540]: E0209 09:53:11.822453 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:53:11.822853 update_engine[1540]: I0209 09:53:11.822659 1540 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 09:53:12.246258 env[1550]: time="2024-02-09T09:53:12.246219880Z" level=info msg="shim disconnected" id=144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2 Feb 9 09:53:12.246258 env[1550]: time="2024-02-09T09:53:12.246255856Z" level=warning msg="cleaning up after shim disconnected" id=144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2 namespace=k8s.io Feb 9 09:53:12.246597 env[1550]: time="2024-02-09T09:53:12.246265938Z" level=info msg="cleaning up dead shim" Feb 9 09:53:12.246908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2-rootfs.mount: Deactivated successfully. 
Feb 9 09:53:12.263779 env[1550]: time="2024-02-09T09:53:12.263742108Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:53:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=13183 runtime=io.containerd.runc.v2\n" Feb 9 09:53:12.580107 kubelet[2000]: E0209 09:53:12.579913 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:12.657781 sshd[13154]: Failed password for root from 218.92.0.112 port 21210 ssh2 Feb 9 09:53:12.813439 kubelet[2000]: I0209 09:53:12.813382 2000 scope.go:115] "RemoveContainer" containerID="befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9" Feb 9 09:53:12.814203 kubelet[2000]: I0209 09:53:12.814157 2000 scope.go:115] "RemoveContainer" containerID="144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2" Feb 9 09:53:12.815183 kubelet[2000]: E0209 09:53:12.815139 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nfs-server-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nfs-server-provisioner pod=nfs-server-provisioner-0_default(b24363a9-2571-48f5-a1f6-b8ef1aea2222)\"" pod="default/nfs-server-provisioner-0" podUID=b24363a9-2571-48f5-a1f6-b8ef1aea2222 Feb 9 09:53:12.816177 env[1550]: time="2024-02-09T09:53:12.816067716Z" level=info msg="RemoveContainer for \"befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9\"" Feb 9 09:53:12.820815 env[1550]: time="2024-02-09T09:53:12.820692448Z" level=info msg="RemoveContainer for \"befaf30543e22a1d0fc60b0d9eede5a3320c0b6558932fb2b52a700a420a1cf9\" returns successfully" Feb 9 09:53:12.919123 sshd[13154]: Received disconnect from 218.92.0.112 port 21210:11: [preauth] Feb 9 09:53:12.919123 sshd[13154]: Disconnected from authenticating user root 218.92.0.112 port 21210 [preauth] Feb 9 09:53:12.919789 sshd[13154]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Feb 9 09:53:12.921891 systemd[1]: sshd@18-139.178.94.23:22-218.92.0.112:21210.service: Deactivated successfully. Feb 9 09:53:12.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.23:22-218.92.0.112:21210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:53:12.938000 audit[13222]: NETFILTER_CFG table=filter:103 family=2 entries=18 op=nft_register_rule pid=13222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:13.076567 kernel: audit: type=1131 audit(1707472392.921:338): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.94.23:22-218.92.0.112:21210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:53:13.076604 kernel: audit: type=1325 audit(1707472392.938:339): table=filter:103 family=2 entries=18 op=nft_register_rule pid=13222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:13.076619 kernel: audit: type=1300 audit(1707472392.938:339): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffcf49218d0 a2=0 a3=7ffcf49218bc items=0 ppid=2277 pid=13222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:12.938000 audit[13222]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffcf49218d0 a2=0 a3=7ffcf49218bc items=0 ppid=2277 pid=13222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:13.175867 kernel: audit: type=1327 audit(1707472392.938:339): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:12.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:13.243000 audit[13222]: NETFILTER_CFG table=nat:104 family=2 entries=162 op=nft_unregister_chain pid=13222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:13.243000 audit[13222]: SYSCALL arch=c000003e syscall=46 success=yes exit=28060 a0=3 a1=7ffcf49218d0 a2=0 a3=7ffcf49218bc items=0 ppid=2277 pid=13222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:13.405648 kernel: audit: type=1325 audit(1707472393.243:340): table=nat:104 family=2 entries=162 op=nft_unregister_chain pid=13222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:13.405676 kernel: audit: type=1300 audit(1707472393.243:340): arch=c000003e syscall=46 success=yes exit=28060 a0=3 a1=7ffcf49218d0 a2=0 a3=7ffcf49218bc items=0 ppid=2277 pid=13222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:13.405692 kernel: audit: type=1327 audit(1707472393.243:340): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:13.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:13.580530 kubelet[2000]: E0209 09:53:13.580422 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:14.581296 kubelet[2000]: E0209 09:53:14.581176 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:15.582513 kubelet[2000]: E0209 09:53:15.582389 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:16.583388 kubelet[2000]: E0209 09:53:16.583271 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:16.951128 kubelet[2000]: I0209 
09:53:16.950919 2000 scope.go:115] "RemoveContainer" containerID="8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b" Feb 9 09:53:16.951941 kubelet[2000]: E0209 09:53:16.951861 2000 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-kube-controllers pod=calico-kube-controllers-cddd66c57-2zj4d_calico-system(09b20649-bbc0-45d1-af93-aab9a21df100)\"" pod="calico-system/calico-kube-controllers-cddd66c57-2zj4d" podUID=09b20649-bbc0-45d1-af93-aab9a21df100 Feb 9 09:53:17.584573 kubelet[2000]: E0209 09:53:17.584450 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:18.584940 kubelet[2000]: E0209 09:53:18.584826 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:19.585833 kubelet[2000]: E0209 09:53:19.585713 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:20.586896 kubelet[2000]: E0209 09:53:20.586781 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:21.588067 kubelet[2000]: E0209 09:53:21.587951 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:21.821995 update_engine[1540]: I0209 09:53:21.821874 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:53:21.822924 update_engine[1540]: I0209 09:53:21.822348 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:53:21.822924 update_engine[1540]: E0209 09:53:21.822593 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:53:21.822924 update_engine[1540]: I0209 09:53:21.822764 1540 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 09:53:22.588355 kubelet[2000]: E0209 09:53:22.588248 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:23.589208 kubelet[2000]: E0209 09:53:23.589096 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:24.379643 kubelet[2000]: E0209 09:53:24.379536 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:24.589415 kubelet[2000]: E0209 09:53:24.589331 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:25.590349 kubelet[2000]: E0209 09:53:25.590275 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:26.591244 kubelet[2000]: E0209 09:53:26.591167 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:27.591824 kubelet[2000]: E0209 09:53:27.591706 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:27.951341 kubelet[2000]: I0209 09:53:27.951146 2000 scope.go:115] "RemoveContainer" containerID="144545b27c67390057ec5d09dbdefb1107b954c1e00392f73797d9a3707df1d2" Feb 9 09:53:27.956580 env[1550]: time="2024-02-09T09:53:27.956420773Z" 
level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:2,}" Feb 9 09:53:27.968105 env[1550]: time="2024-02-09T09:53:27.968085250Z" level=info msg="CreateContainer within sandbox \"7a67bae3ed79385f0a42af8f94944a0bf0b7c29c39c7f3804a4bb86ee5d38abc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:2,} returns container id \"cc91b56965a5c97a0848f6b9e59cd919920eff2c0e1715ac15d07b3fa55263bf\"" Feb 9 09:53:27.968393 env[1550]: time="2024-02-09T09:53:27.968380003Z" level=info msg="StartContainer for \"cc91b56965a5c97a0848f6b9e59cd919920eff2c0e1715ac15d07b3fa55263bf\"" Feb 9 09:53:28.016410 env[1550]: time="2024-02-09T09:53:28.016340518Z" level=info msg="StartContainer for \"cc91b56965a5c97a0848f6b9e59cd919920eff2c0e1715ac15d07b3fa55263bf\" returns successfully" Feb 9 09:53:28.592556 kubelet[2000]: E0209 09:53:28.592438 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:28.951533 kubelet[2000]: I0209 09:53:28.951324 2000 scope.go:115] "RemoveContainer" containerID="8b340adab8f5b0c2e94c7ab2d2714e68e0085f7071d88709b92cc1b2001c963b" Feb 9 09:53:28.966444 env[1550]: time="2024-02-09T09:53:28.966346414Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:3,}" Feb 9 09:53:28.978544 env[1550]: time="2024-02-09T09:53:28.978411725Z" level=info msg="CreateContainer within sandbox \"384cfa5bfa569cb4517120343454bea0f1fef579579e3f6d2b443ab29e5438e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:3,} returns container id \"2d8892b5874bf4b6b4e4ccae130a6575c9dd99f433ca12d44f34185efdff2c55\"" Feb 9 09:53:28.979322 env[1550]: time="2024-02-09T09:53:28.979212933Z" level=info msg="StartContainer for \"2d8892b5874bf4b6b4e4ccae130a6575c9dd99f433ca12d44f34185efdff2c55\"" Feb 9 09:53:28.993000 audit[13360]: NETFILTER_CFG table=filter:105 family=2 entries=18 op=nft_register_rule pid=13360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:28.993000 audit[13360]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe8587eb40 a2=0 a3=7ffe8587eb2c items=0 ppid=2277 pid=13360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:29.155818 kernel: audit: type=1325 audit(1707472408.993:341): table=filter:105 family=2 entries=18 op=nft_register_rule pid=13360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:29.155875 kernel: audit: type=1300 audit(1707472408.993:341): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe8587eb40 a2=0 a3=7ffe8587eb2c items=0 ppid=2277 pid=13360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:29.155903 kernel: audit: type=1327 audit(1707472408.993:341): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:28.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:29.223000 audit[13360]: 
NETFILTER_CFG table=nat:106 family=2 entries=162 op=nft_register_chain pid=13360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:29.223000 audit[13360]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe8587eb40 a2=0 a3=7ffe8587eb2c items=0 ppid=2277 pid=13360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:29.385698 kernel: audit: type=1325 audit(1707472409.223:342): table=nat:106 family=2 entries=162 op=nft_register_chain pid=13360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:53:29.385767 kernel: audit: type=1300 audit(1707472409.223:342): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe8587eb40 a2=0 a3=7ffe8587eb2c items=0 ppid=2277 pid=13360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:29.385786 kernel: audit: type=1327 audit(1707472409.223:342): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:29.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:53:29.464056 env[1550]: time="2024-02-09T09:53:29.463990185Z" level=info msg="StartContainer for \"2d8892b5874bf4b6b4e4ccae130a6575c9dd99f433ca12d44f34185efdff2c55\" returns successfully" Feb 9 09:53:29.593091 kubelet[2000]: E0209 09:53:29.593010 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:30.593836 kubelet[2000]: E0209 09:53:30.593729 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:30.609781 sshd[9221]: Timeout before authentication for 202.120.37.249 port 55488 Feb 9 09:53:30.611298 systemd[1]: sshd@14-139.178.94.23:22-202.120.37.249:55488.service: Deactivated successfully. Feb 9 09:53:30.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.23:22-202.120.37.249:55488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:53:30.705687 kernel: audit: type=1131 audit(1707472410.610:343): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.94.23:22-202.120.37.249:55488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:53:31.594090 kubelet[2000]: E0209 09:53:31.593973 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:31.822037 update_engine[1540]: I0209 09:53:31.821918 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:53:31.823001 update_engine[1540]: I0209 09:53:31.822392 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:53:31.823001 update_engine[1540]: E0209 09:53:31.822693 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:53:31.823001 update_engine[1540]: I0209 09:53:31.822844 1540 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:53:31.823001 update_engine[1540]: I0209 09:53:31.822858 1540 omaha_request_action.cc:621] Omaha request response: Feb 9 09:53:31.823001 update_engine[1540]: E0209 09:53:31.822997 1540 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823024 1540 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823033 1540 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823042 1540 update_attempter.cc:306] Processing Done. Feb 9 09:53:31.823521 update_engine[1540]: E0209 09:53:31.823067 1540 update_attempter.cc:619] Update failed. Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823076 1540 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823086 1540 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823094 1540 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823246 1540 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823295 1540 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823305 1540 omaha_request_action.cc:271] Request: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: Feb 9 09:53:31.823521 update_engine[1540]: I0209 09:53:31.823315 1540 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823632 1540 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:53:31.825062 update_engine[1540]: E0209 09:53:31.823796 1540 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823927 1540 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823942 1540 omaha_request_action.cc:621] Omaha request response: Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823952 1540 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823961 1540 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823967 1540 update_attempter.cc:306] Processing Done. Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823974 1540 update_attempter.cc:310] Error event sent. 
Feb 9 09:53:31.825062 update_engine[1540]: I0209 09:53:31.823999 1540 update_check_scheduler.cc:74] Next update check in 43m5s Feb 9 09:53:31.825889 locksmithd[1591]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 09:53:31.825889 locksmithd[1591]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 09:53:32.594369 kubelet[2000]: E0209 09:53:32.594261 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:33.595014 kubelet[2000]: E0209 09:53:33.594914 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:34.595624 kubelet[2000]: E0209 09:53:34.595519 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:35.596788 kubelet[2000]: E0209 09:53:35.596672 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:36.597228 kubelet[2000]: E0209 09:53:36.597117 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:37.598029 kubelet[2000]: E0209 09:53:37.597962 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:37.958795 kubelet[2000]: I0209 09:53:37.958580 2000 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:53:38.158655 kubelet[2000]: I0209 09:53:38.158557 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fhr\" (UniqueName: \"kubernetes.io/projected/707bbcff-0fcd-40b9-9cf4-0dec7603f54f-kube-api-access-l5fhr\") pod \"test-pod-1\" (UID: \"707bbcff-0fcd-40b9-9cf4-0dec7603f54f\") " pod="default/test-pod-1" Feb 9 09:53:38.158967 kubelet[2000]: I0209 09:53:38.158679 2000 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d907f301-5b69-476e-b943-64c335d147f8\" (UniqueName: \"kubernetes.io/nfs/707bbcff-0fcd-40b9-9cf4-0dec7603f54f-pvc-d907f301-5b69-476e-b943-64c335d147f8\") pod \"test-pod-1\" (UID: \"707bbcff-0fcd-40b9-9cf4-0dec7603f54f\") " pod="default/test-pod-1" Feb 9 09:53:38.274000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.393205 kernel: Failed to create system directory netfs Feb 9 09:53:38.393252 kernel: audit: type=1400 audit(1707472418.274:344): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.393270 kernel: Failed to create system directory netfs Feb 9 09:53:38.393282 kernel: audit: type=1400 audit(1707472418.274:344): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.274000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 9 09:53:38.274000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.532438 kernel: Failed to create system directory netfs Feb 9 09:53:38.532468 kernel: audit: type=1400 audit(1707472418.274:344): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.532485 kernel: Failed to create system directory netfs Feb 9 09:53:38.598289 kubelet[2000]: E0209 09:53:38.598251 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:38.618409 kernel: audit: type=1400 audit(1707472418.274:344): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.274000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.274000 audit[13437]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b6ee715e0 a1=153bc a2=555b6e0392b0 a3=5 items=0 ppid=872 pid=13437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:38.274000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 09:53:38.854764 kernel: audit: type=1300 audit(1707472418.274:344): arch=c000003e syscall=175 success=yes exit=0 a0=555b6ee715e0 a1=153bc a2=555b6e0392b0 a3=5 items=0 ppid=872 pid=13437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:38.854796 kernel: audit: type=1327 audit(1707472418.274:344): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.880486 kernel: Failed to create system directory fscache Feb 9 09:53:38.880515 kernel: audit: type=1400 audit(1707472418.732:345): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.880531 kernel: Failed to create system directory fscache Feb 9 09:53:38.964940 kernel: audit: type=1400 audit(1707472418.732:345): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.990114 kernel: 
Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.098430 kernel: audit: type=1400 audit(1707472418.732:345): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.098486 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.181483 kernel: audit: type=1400 audit(1707472418.732:345): avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.311774 kernel: Failed to create system directory fscache Feb 9 09:53:39.311801 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.335441 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.358686 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.381541 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.404032 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.425967 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.447525 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.468550 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.489409 kernel: Failed to create system directory fscache Feb 9 09:53:38.732000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:38.732000 audit[13437]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b6f0869c0 a1=4c0fc a2=555b6e0392b0 a3=5 items=0 ppid=872 pid=13437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:38.732000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 09:53:39.529508 kernel: FS-Cache: Loaded Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.587497 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.587548 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.587578 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.598649 kubelet[2000]: E0209 09:53:39.598608 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:39.606453 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.624963 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.643331 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.661300 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.679168 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.696330 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.713650 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.730549 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.746751 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.762769 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.778664 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.794135 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.809217 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.823739 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.837901 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.541000 audit[13437]: AVC 
avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.864805 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.864834 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.877415 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.889606 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.901476 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.913077 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.924019 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.934809 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.945272 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.955279 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.964908 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.974385 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for 
pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.983555 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.992215 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.000394 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.008443 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.016133 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.023348 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.030039 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.036591 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.043158 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.049701 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.056235 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.062752 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.069298 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.075878 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.082421 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.088954 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.095475 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.101886 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.108128 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.114383 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.120661 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.126901 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.133175 kernel: Failed to 
create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.139433 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.145694 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.151926 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.158184 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.164438 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.170686 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.176913 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.183181 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.189453 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.195706 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.201949 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for 
pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.208196 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.214465 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.220772 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.227051 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.233305 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.245914 kernel: Failed to create system directory sunrpc Feb 9 09:53:40.245967 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.252162 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.258399 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.264626 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.270864 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.277118 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.283365 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.289606 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.295853 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.302079 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.308330 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.314538 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.320812 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.327045 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.333285 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.339515 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.345745 kernel: Failed to 
create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.352003 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.358258 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.364520 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.370780 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.377041 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.383285 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.389516 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.395762 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.401979 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.408226 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.414499 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for 
pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.420711 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.426930 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.433156 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.439387 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.445592 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.451850 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.458069 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.464291 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.470559 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.476765 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.482995 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.489226 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.495454 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.501682 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.507899 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.514109 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.520354 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.526538 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.532792 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.539017 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.545239 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.551459 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.557692 kernel: Failed to 
create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.563895 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.570152 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.576390 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.582694 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.588904 kernel: Failed to create system directory sunrpc Feb 9 09:53:39.541000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.599086 kubelet[2000]: E0209 09:53:40.599045 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:40.614902 kernel: RPC: Registered named UNIX socket transport module. Feb 9 09:53:40.614961 kernel: RPC: Registered udp transport module. Feb 9 09:53:40.614972 kernel: RPC: Registered tcp transport module. Feb 9 09:53:40.621167 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 09:53:39.541000 audit[13437]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b6f0d2ad0 a1=1588c4 a2=555b6e0392b0 a3=5 items=6 ppid=872 pid=13437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:39.541000 audit: CWD cwd="/" Feb 9 09:53:39.541000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PATH item=1 name=(null) inode=48574 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PATH item=2 name=(null) inode=48574 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PATH item=3 name=(null) inode=48575 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PATH item=4 name=(null) inode=48574 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PATH item=5 name=(null) inode=48576 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:53:39.541000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.662271 kernel: Failed to create system directory nfs Feb 9 09:53:40.662295 kernel: Failed to create system directory nfs Feb 9 09:53:40.662312 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.668911 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.675467 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.682087 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.688693 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.695273 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.701881 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.708471 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.715074 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.721655 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.728253 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.734846 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.741430 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.748027 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.754620 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.761207 kernel: Failed 
to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.767798 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.774380 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.781027 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.787677 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.794341 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.801037 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.807697 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.814320 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.820967 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.827624 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.834276 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.840929 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.847548 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.854223 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.860835 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.867430 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.874019 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.880593 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.887185 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.893796 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.900368 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.906954 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 
9 09:53:40.913540 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.920134 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.926697 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.933273 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.939669 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.945935 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.952212 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.958384 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.964346 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.970302 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.976248 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.982159 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for 
pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.988152 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.994102 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.000056 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.006008 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.011959 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.017913 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.023869 kernel: Failed to create system directory nfs Feb 9 09:53:40.644000 audit[13437]: AVC avc: denied { confidentiality } for pid=13437 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:40.644000 audit[13437]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555b6f275680 a1=e29dc a2=555b6e0392b0 a3=5 items=0 ppid=872 pid=13437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:40.644000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 09:53:41.045530 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.081776 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.081805 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.081818 kernel: Failed to create system directory nfs4 Feb 9 
09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.088070 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.094411 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.100721 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.107035 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.113350 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.119706 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.126002 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.132367 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.138686 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.144993 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.151324 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.157631 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.163953 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.170278 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.176587 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.182927 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.189250 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.195537 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.201909 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.208249 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.214538 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.220954 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.227284 
kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.233622 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.239968 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.246304 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.252686 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.259011 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.265368 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.271707 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.278073 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.284406 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.290764 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.297109 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.303465 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.309819 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.316178 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.322534 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.328892 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.335252 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.341674 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.348148 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.354617 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.361057 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.373357 kernel: Failed to create 
system directory nfs4 Feb 9 09:53:41.373423 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.379537 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.391773 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.391837 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.397930 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.404026 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.410115 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.416183 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.422273 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.428369 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.434444 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.440562 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC 
avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.446715 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.452828 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.458939 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.464956 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.471017 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.477022 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.483053 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.489078 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.495108 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.501132 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.507117 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.513127 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.519141 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.525136 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.531117 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.537113 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.543087 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.549074 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.555057 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.561028 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.566985 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.572951 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.578911 kernel: Failed to create system directory 
nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.584895 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.590844 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.596814 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.600099 kubelet[2000]: E0209 09:53:41.600060 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:41.602756 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.608704 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.614666 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.626564 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.626591 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.632488 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.638436 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 
09:53:41.644355 kernel: Failed to create system directory nfs4 Feb 9 09:53:41.061000 audit[13445]: AVC avc: denied { confidentiality } for pid=13445 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.753238 kernel: NFS: Registering the id_resolver key type Feb 9 09:53:41.753273 kernel: Key type id_resolver registered Feb 9 09:53:41.753287 kernel: Key type id_legacy registered Feb 9 09:53:41.061000 audit[13445]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fc926f38010 a1=1d3cc4 a2=56424e0d92b0 a3=5 items=0 ppid=872 pid=13445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:41.061000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.778662 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.778695 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.778710 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.785273 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.791896 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.798514 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.805118 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.811750 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.818350 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: 
AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.824975 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.831548 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.838178 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.844797 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.851429 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.858067 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.864694 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.871314 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.877941 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.891251 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.891276 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.897896 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.904547 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.911200 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.917841 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.924512 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.931201 kernel: Failed to create system directory rpcgss Feb 9 09:53:41.763000 audit[13466]: AVC avc: denied { confidentiality } for pid=13466 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 09:53:41.763000 audit[13466]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f83b3b9f010 a1=4f524 a2=560b8a7612b0 a3=5 items=0 ppid=872 pid=13466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:41.763000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 9 09:53:41.979331 nfsidmap[13475]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c37e5c1643' Feb 9 09:53:42.057960 nfsidmap[13476]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-c37e5c1643' Feb 9 09:53:42.079000 audit[1630]: AVC avc: denied { watch_reads } for pid=1630 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1630]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=564726c5fbb0 a2=10 a3=aca5c17b15a62077 items=0 ppid=1 pid=1630 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:42.079000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 09:53:42.079000 audit[1]: AVC avc: denied { watch_reads } 
for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1630]: AVC avc: denied { watch_reads } for pid=1630 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1630]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=564726c5fbb0 a2=10 a3=aca5c17b15a62077 items=0 ppid=1 pid=1630 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:42.079000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 09:53:42.079000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1630]: AVC avc: denied { watch_reads } for pid=1630 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=3776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 09:53:42.079000 audit[1630]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=564726c5fbb0 a2=10 a3=aca5c17b15a62077 items=0 ppid=1 pid=1630 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:42.079000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 09:53:42.164052 env[1550]: time="2024-02-09T09:53:42.163968856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:707bbcff-0fcd-40b9-9cf4-0dec7603f54f,Namespace:default,Attempt:0,}" Feb 9 09:53:42.332941 systemd-networkd[1407]: cali5ec59c6bf6e: Link UP Feb 9 09:53:42.351301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:53:42.351340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 9 09:53:42.351415 systemd-networkd[1407]: cali5ec59c6bf6e: Gained carrier Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.235 [INFO][13478] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.80.11-k8s-test--pod--1-eth0 default 707bbcff-0fcd-40b9-9cf4-0dec7603f54f 2314 0 2024-02-09 09:51:55 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.80.11 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.235 [INFO][13478] k8s.go 76: Extracted identifiers for CmdAddK8s 
ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.256 [INFO][13502] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" HandleID="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Workload="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.276 [INFO][13502] ipam_plugin.go 268: Auto assigning IP ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" HandleID="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Workload="10.67.80.11-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eb320), Attrs:map[string]string{"namespace":"default", "node":"10.67.80.11", "pod":"test-pod-1", "timestamp":"2024-02-09 09:53:42.256896416 +0000 UTC"}, Hostname:"10.67.80.11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.276 [INFO][13502] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.276 [INFO][13502] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.276 [INFO][13502] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.80.11' Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.279 [INFO][13502] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.286 [INFO][13502] ipam.go 372: Looking up existing affinities for host host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.294 [INFO][13502] ipam.go 489: Trying affinity for 192.168.5.64/26 host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.298 [INFO][13502] ipam.go 155: Attempting to load block cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.303 [INFO][13502] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.64/26 host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.303 [INFO][13502] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.64/26 handle="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.306 [INFO][13502] ipam.go 1682: Creating new handle: k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254 Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.313 [INFO][13502] ipam.go 1203: Writing block in order to claim IPs block=192.168.5.64/26 handle="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.326 [INFO][13502] ipam.go 1216: Successfully claimed IPs: [192.168.5.71/26] block=192.168.5.64/26 handle="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.326 [INFO][13502] 
ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.71/26] handle="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" host="10.67.80.11" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.326 [INFO][13502] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.326 [INFO][13502] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.5.71/26] IPv6=[] ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" HandleID="k8s-pod-network.3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Workload="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.330 [INFO][13478] k8s.go 385: Populated endpoint ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"707bbcff-0fcd-40b9-9cf4-0dec7603f54f", ResourceVersion:"2314", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:53:42.362035 env[1550]: 2024-02-09 09:53:42.330 [INFO][13478] k8s.go 386: Calico CNI using IPs: [192.168.5.71/32] ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362568 env[1550]: 2024-02-09 09:53:42.330 [INFO][13478] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362568 env[1550]: 2024-02-09 09:53:42.333 [INFO][13478] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.362568 env[1550]: 2024-02-09 09:53:42.351 [INFO][13478] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.80.11-k8s-test--pod--1-eth0", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"707bbcff-0fcd-40b9-9cf4-0dec7603f54f", ResourceVersion:"2314", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.80.11", ContainerID:"3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.5.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"7e:6f:40:52:23:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:53:42.362568 env[1550]: 2024-02-09 09:53:42.361 [INFO][13478] k8s.go 491: Wrote updated endpoint to datastore ContainerID="3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.80.11-k8s-test--pod--1-eth0" Feb 9 09:53:42.368998 env[1550]: time="2024-02-09T09:53:42.368963556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:53:42.368998 env[1550]: time="2024-02-09T09:53:42.368983992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:53:42.368998 env[1550]: time="2024-02-09T09:53:42.368991121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:53:42.369123 env[1550]: time="2024-02-09T09:53:42.369053821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254 pid=13536 runtime=io.containerd.runc.v2 Feb 9 09:53:42.369000 audit[13541]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=13541 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:53:42.369000 audit[13541]: SYSCALL arch=c000003e syscall=46 success=yes exit=20236 a0=3 a1=7ffd07cefa90 a2=0 a3=7ffd07cefa7c items=0 ppid=12828 pid=13541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:53:42.369000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:53:42.429860 env[1550]: time="2024-02-09T09:53:42.429798200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:707bbcff-0fcd-40b9-9cf4-0dec7603f54f,Namespace:default,Attempt:0,} returns sandbox id \"3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254\"" Feb 9 09:53:42.430695 env[1550]: time="2024-02-09T09:53:42.430676180Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 09:53:42.601345 kubelet[2000]: E0209 09:53:42.601180 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:42.859737 env[1550]: time="2024-02-09T09:53:42.859661347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:53:42.860270 env[1550]: time="2024-02-09T09:53:42.860256675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:53:42.861199 env[1550]: time="2024-02-09T09:53:42.861187560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:53:42.862138 env[1550]: time="2024-02-09T09:53:42.862109971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:53:42.862645 env[1550]: time="2024-02-09T09:53:42.862594526Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 09:53:42.863590 env[1550]: time="2024-02-09T09:53:42.863577162Z" level=info msg="CreateContainer within sandbox \"3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 09:53:42.867919 env[1550]: time="2024-02-09T09:53:42.867879005Z" level=info msg="CreateContainer within sandbox \"3c07b29ca0e5c3da839c741bec6a3fa762bc34981f4035c5f3cfc78f2846f254\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"847bfdab0cef57c31396bc3ac9e4e89dbfe86db5628960e5e2ee1108eb95de65\"" Feb 9 
09:53:42.868200 env[1550]: time="2024-02-09T09:53:42.868144414Z" level=info msg="StartContainer for \"847bfdab0cef57c31396bc3ac9e4e89dbfe86db5628960e5e2ee1108eb95de65\"" Feb 9 09:53:42.904644 env[1550]: time="2024-02-09T09:53:42.904608768Z" level=info msg="StartContainer for \"847bfdab0cef57c31396bc3ac9e4e89dbfe86db5628960e5e2ee1108eb95de65\" returns successfully" Feb 9 09:53:42.929972 kubelet[2000]: I0209 09:53:42.929935 2000 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337192892486e+09 pod.CreationTimestamp="2024-02-09 09:51:55 +0000 UTC" firstStartedPulling="2024-02-09 09:53:42.430514847 +0000 UTC m=+318.289751139" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:53:42.929679155 +0000 UTC m=+318.788915441" watchObservedRunningTime="2024-02-09 09:53:42.929915115 +0000 UTC m=+318.789151398" Feb 9 09:53:43.602213 kubelet[2000]: E0209 09:53:43.602118 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:44.247797 systemd-networkd[1407]: cali5ec59c6bf6e: Gained IPv6LL Feb 9 09:53:44.378795 kubelet[2000]: E0209 09:53:44.378667 2000 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:44.603343 kubelet[2000]: E0209 09:53:44.603135 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:45.603884 kubelet[2000]: E0209 09:53:45.603772 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:53:46.604653 kubelet[2000]: E0209 09:53:46.604535 2000 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"