Feb 12 19:38:12.770796 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 19:38:12.770813 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:38:12.770822 kernel: BIOS-provided physical RAM map: Feb 12 19:38:12.770827 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 12 19:38:12.770832 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 12 19:38:12.770838 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 12 19:38:12.770844 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 12 19:38:12.770850 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 12 19:38:12.770855 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 12 19:38:12.770862 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 12 19:38:12.770873 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 12 19:38:12.770879 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 12 19:38:12.770884 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 12 19:38:12.770890 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 12 19:38:12.770896 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 12 19:38:12.770903 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 12 19:38:12.770909 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 12 19:38:12.770915 kernel: NX (Execute Disable) protection: active Feb 12 19:38:12.770921 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 12 19:38:12.770927 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable Feb 12 19:38:12.770932 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 12 19:38:12.770938 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable Feb 12 19:38:12.770943 kernel: extended physical RAM map: Feb 12 19:38:12.770949 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 12 19:38:12.770955 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 12 19:38:12.770962 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 12 19:38:12.770967 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 12 19:38:12.770973 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 12 19:38:12.770979 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 12 19:38:12.770985 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 12 19:38:12.770990 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable Feb 12 19:38:12.770996 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable Feb 12 19:38:12.771002 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable Feb 12 19:38:12.771007 kernel: reserve setup_data: [mem 
0x000000009b3f7018-0x000000009b400c57] usable Feb 12 19:38:12.771013 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable Feb 12 19:38:12.771019 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Feb 12 19:38:12.771025 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 12 19:38:12.771031 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 12 19:38:12.771037 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 12 19:38:12.771043 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 12 19:38:12.771051 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 12 19:38:12.771057 kernel: efi: EFI v2.70 by EDK II Feb 12 19:38:12.771064 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Feb 12 19:38:12.771071 kernel: random: crng init done Feb 12 19:38:12.771077 kernel: SMBIOS 2.8 present. Feb 12 19:38:12.771083 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Feb 12 19:38:12.771089 kernel: Hypervisor detected: KVM Feb 12 19:38:12.771095 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 19:38:12.771102 kernel: kvm-clock: cpu 0, msr 71faa001, primary cpu clock Feb 12 19:38:12.771108 kernel: kvm-clock: using sched offset of 3897483417 cycles Feb 12 19:38:12.771115 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 19:38:12.771121 kernel: tsc: Detected 2794.750 MHz processor Feb 12 19:38:12.771129 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 19:38:12.771135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 19:38:12.771142 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 12 19:38:12.771148 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 19:38:12.771155 kernel: Using GB pages for direct mapping Feb 12 19:38:12.771161 kernel: Secure boot disabled Feb 12 19:38:12.771168 kernel: ACPI: Early table checksum verification disabled Feb 12 19:38:12.771174 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 12 19:38:12.771181 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Feb 12 19:38:12.771188 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:38:12.771195 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:38:12.771201 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 12 19:38:12.771207 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:38:12.771214 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:38:12.771220 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:38:12.771226 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 12 19:38:12.771233 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Feb 12 19:38:12.771239 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Feb 12 19:38:12.771246 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 12 19:38:12.771253 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Feb 12 19:38:12.771259 kernel: ACPI: Reserving HPET table memory at [mem 
0x9cb78000-0x9cb78037] Feb 12 19:38:12.771265 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Feb 12 19:38:12.771271 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Feb 12 19:38:12.771278 kernel: No NUMA configuration found Feb 12 19:38:12.771284 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 12 19:38:12.771290 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 12 19:38:12.771297 kernel: Zone ranges: Feb 12 19:38:12.771304 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 19:38:12.771310 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 12 19:38:12.771317 kernel: Normal empty Feb 12 19:38:12.771323 kernel: Movable zone start for each node Feb 12 19:38:12.771354 kernel: Early memory node ranges Feb 12 19:38:12.771362 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 12 19:38:12.771369 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 12 19:38:12.771375 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 12 19:38:12.771381 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 12 19:38:12.771389 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 12 19:38:12.771395 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 12 19:38:12.771402 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 12 19:38:12.771408 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:38:12.771414 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 12 19:38:12.771421 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 12 19:38:12.771427 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:38:12.771433 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 12 19:38:12.771440 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 12 19:38:12.771447 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 12 19:38:12.771454 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 12 19:38:12.771460 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 19:38:12.771466 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 19:38:12.771473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 19:38:12.771479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 19:38:12.771486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 19:38:12.771492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 19:38:12.771498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 19:38:12.771506 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 19:38:12.771512 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 19:38:12.771518 kernel: TSC deadline timer available Feb 12 19:38:12.771525 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 12 19:38:12.771531 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 12 19:38:12.771537 kernel: kvm-guest: setup PV sched yield Feb 12 19:38:12.771544 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Feb 12 19:38:12.771550 kernel: Booting paravirtualized kernel on KVM Feb 12 19:38:12.771557 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 19:38:12.771563 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Feb 12 19:38:12.771571 kernel: percpu: 
Embedded 55 pages/cpu s185624 r8192 d31464 u524288 Feb 12 19:38:12.771577 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152 Feb 12 19:38:12.771587 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 12 19:38:12.771594 kernel: kvm-guest: setup async PF for cpu 0 Feb 12 19:38:12.771601 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0 Feb 12 19:38:12.771608 kernel: kvm-guest: PV spinlocks enabled Feb 12 19:38:12.771614 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 19:38:12.771621 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 12 19:38:12.771628 kernel: Policy zone: DMA32 Feb 12 19:38:12.771636 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:38:12.771643 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:38:12.771651 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 19:38:12.771658 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:38:12.771667 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:38:12.771674 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved) Feb 12 19:38:12.771683 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 12 19:38:12.771691 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 19:38:12.771698 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 19:38:12.771705 kernel: rcu: Hierarchical RCU implementation. Feb 12 19:38:12.771712 kernel: rcu: RCU event tracing is enabled. Feb 12 19:38:12.771719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 12 19:38:12.771726 kernel: Rude variant of Tasks RCU enabled. Feb 12 19:38:12.771732 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:38:12.771739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 19:38:12.771746 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 12 19:38:12.771754 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 12 19:38:12.771760 kernel: Console: colour dummy device 80x25 Feb 12 19:38:12.771767 kernel: printk: console [ttyS0] enabled Feb 12 19:38:12.771774 kernel: ACPI: Core revision 20210730 Feb 12 19:38:12.771780 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 19:38:12.771787 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 19:38:12.771794 kernel: x2apic enabled Feb 12 19:38:12.771801 kernel: Switched APIC routing to physical x2apic. Feb 12 19:38:12.771807 kernel: kvm-guest: setup PV IPIs Feb 12 19:38:12.771815 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 19:38:12.771822 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 12 19:38:12.771829 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 12 19:38:12.771836 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 12 19:38:12.771842 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 12 19:38:12.771849 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 12 19:38:12.771856 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 19:38:12.771863 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 19:38:12.771874 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 19:38:12.771882 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 19:38:12.771889 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 12 19:38:12.771896 kernel: RETBleed: Mitigation: untrained return thunk Feb 12 19:38:12.771902 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 19:38:12.771909 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 19:38:12.771916 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 19:38:12.771923 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 19:38:12.771930 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 19:38:12.771938 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 19:38:12.771945 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 19:38:12.771951 kernel: Freeing SMP alternatives memory: 32K Feb 12 19:38:12.771958 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:38:12.771965 kernel: LSM: Security Framework initializing Feb 12 19:38:12.771971 kernel: SELinux: Initializing. Feb 12 19:38:12.771978 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:38:12.771985 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 12 19:38:12.771992 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 12 19:38:12.771999 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 12 19:38:12.772006 kernel: ... version: 0 Feb 12 19:38:12.772013 kernel: ... bit width: 48 Feb 12 19:38:12.772019 kernel: ... generic registers: 6 Feb 12 19:38:12.772026 kernel: ... value mask: 0000ffffffffffff Feb 12 19:38:12.772033 kernel: ... max period: 00007fffffffffff Feb 12 19:38:12.772039 kernel: ... fixed-purpose events: 0 Feb 12 19:38:12.772046 kernel: ... event mask: 000000000000003f Feb 12 19:38:12.772052 kernel: signal: max sigframe size: 1776 Feb 12 19:38:12.772059 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:38:12.772067 kernel: smp: Bringing up secondary CPUs ... Feb 12 19:38:12.772074 kernel: x86: Booting SMP configuration: Feb 12 19:38:12.772080 kernel: .... 
node #0, CPUs: #1 Feb 12 19:38:12.772087 kernel: kvm-clock: cpu 1, msr 71faa041, secondary cpu clock Feb 12 19:38:12.772094 kernel: kvm-guest: setup async PF for cpu 1 Feb 12 19:38:12.772100 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0 Feb 12 19:38:12.772107 kernel: #2 Feb 12 19:38:12.772114 kernel: kvm-clock: cpu 2, msr 71faa081, secondary cpu clock Feb 12 19:38:12.772121 kernel: kvm-guest: setup async PF for cpu 2 Feb 12 19:38:12.772128 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0 Feb 12 19:38:12.772135 kernel: #3 Feb 12 19:38:12.772141 kernel: kvm-clock: cpu 3, msr 71faa0c1, secondary cpu clock Feb 12 19:38:12.772148 kernel: kvm-guest: setup async PF for cpu 3 Feb 12 19:38:12.772155 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0 Feb 12 19:38:12.772161 kernel: smp: Brought up 1 node, 4 CPUs Feb 12 19:38:12.772168 kernel: smpboot: Max logical packages: 1 Feb 12 19:38:12.772175 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 12 19:38:12.772181 kernel: devtmpfs: initialized Feb 12 19:38:12.772189 kernel: x86/mm: Memory block size: 128MB Feb 12 19:38:12.772196 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 12 19:38:12.772203 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 12 19:38:12.772210 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 12 19:38:12.772217 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 12 19:38:12.772224 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 12 19:38:12.772231 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:38:12.772237 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 12 19:38:12.772244 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:38:12.772252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:38:12.772259 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:38:12.772266 kernel: audit: type=2000 audit(1707766691.809:1): state=initialized audit_enabled=0 res=1 Feb 12 19:38:12.772272 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:38:12.772279 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 19:38:12.772286 kernel: cpuidle: using governor menu Feb 12 19:38:12.772293 kernel: ACPI: bus type PCI registered Feb 12 19:38:12.772300 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:38:12.772306 kernel: dca service started, version 1.12.1 Feb 12 19:38:12.772314 kernel: PCI: Using configuration type 1 for base access Feb 12 19:38:12.772321 kernel: PCI: Using configuration type 1 for extended access Feb 12 19:38:12.772328 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 19:38:12.772344 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 19:38:12.772351 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:38:12.772357 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:38:12.772364 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:38:12.772370 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:38:12.772377 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:38:12.772385 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:38:12.772392 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:38:12.772399 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:38:12.772405 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:38:12.772413 kernel: ACPI: Interpreter enabled Feb 12 19:38:12.772420 kernel: ACPI: PM: (supports S0 S3 S5) Feb 12 19:38:12.772435 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 19:38:12.772442 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 19:38:12.772449 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 19:38:12.772457 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:38:12.772564 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:38:12.772586 kernel: acpiphp: Slot [3] registered Feb 12 19:38:12.772593 kernel: acpiphp: Slot [4] registered Feb 12 19:38:12.772600 kernel: acpiphp: Slot [5] registered Feb 12 19:38:12.772606 kernel: acpiphp: Slot [6] registered Feb 12 19:38:12.772613 kernel: acpiphp: Slot [7] registered Feb 12 19:38:12.772620 kernel: acpiphp: Slot [8] registered Feb 12 19:38:12.772628 kernel: acpiphp: Slot [9] registered Feb 12 19:38:12.772635 kernel: acpiphp: Slot [10] registered Feb 12 19:38:12.772650 kernel: acpiphp: Slot [11] registered Feb 12 19:38:12.772657 kernel: acpiphp: Slot [12] registered Feb 12 19:38:12.772663 kernel: acpiphp: Slot [13] registered Feb 12 19:38:12.772670 kernel: acpiphp: Slot [14] registered Feb 12 19:38:12.772677 kernel: acpiphp: Slot [15] registered Feb 12 19:38:12.772683 kernel: acpiphp: Slot [16] registered Feb 12 19:38:12.772690 kernel: acpiphp: Slot [17] registered Feb 12 19:38:12.772702 kernel: acpiphp: Slot [18] registered Feb 12 19:38:12.772713 kernel: acpiphp: Slot [19] registered Feb 12 19:38:12.772719 kernel: acpiphp: Slot [20] registered Feb 12 19:38:12.772726 kernel: acpiphp: Slot [21] registered Feb 12 19:38:12.772733 kernel: acpiphp: Slot [22] registered Feb 12 19:38:12.772739 kernel: acpiphp: Slot [23] registered Feb 12 19:38:12.772746 kernel: acpiphp: Slot [24] registered Feb 12 19:38:12.772752 kernel: acpiphp: Slot [25] registered Feb 12 19:38:12.772768 kernel: acpiphp: Slot [26] registered Feb 12 19:38:12.772775 kernel: acpiphp: Slot [27] registered Feb 12 19:38:12.772783 kernel: acpiphp: Slot [28] registered Feb 12 19:38:12.772790 kernel: acpiphp: Slot [29] registered Feb 12 19:38:12.772796 kernel: acpiphp: Slot [30] registered Feb 12 19:38:12.772803 kernel: acpiphp: Slot [31] registered Feb 12 19:38:12.772810 kernel: PCI host bridge to bus 0000:00 Feb 12 19:38:12.772910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 19:38:12.772986 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 19:38:12.773060 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 19:38:12.773126 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Feb 12 19:38:12.773199 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Feb 12 19:38:12.773271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:38:12.773414 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 19:38:12.773503 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 19:38:12.773601 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 19:38:12.773699 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Feb 12 19:38:12.773779 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 19:38:12.773849 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 19:38:12.773926 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 19:38:12.773993 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 19:38:12.774068 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 19:38:12.774135 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 12 19:38:12.774205 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 12 19:38:12.774279 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Feb 12 19:38:12.774360 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 12 19:38:12.774433 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Feb 12 19:38:12.774509 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 12 19:38:12.774577 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Feb 12 19:38:12.774644 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 19:38:12.774723 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Feb 12 19:38:12.774792 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Feb 12 19:38:12.774863 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 12 19:38:12.774939 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 12 19:38:12.775030 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:38:12.775114 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 19:38:12.775185 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 12 19:38:12.775256 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 12 19:38:12.775342 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:38:12.775413 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 12 19:38:12.775481 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Feb 12 19:38:12.775548 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 12 19:38:12.775615 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 12 19:38:12.775625 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 19:38:12.775635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 19:38:12.775641 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 19:38:12.775648 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 19:38:12.775655 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 19:38:12.775662 kernel: iommu: Default domain type: Translated Feb 12 19:38:12.775669 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 19:38:12.775734 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 19:38:12.775801 kernel: pci 0000:00:02.0: 
vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 19:38:12.775875 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 19:38:12.775886 kernel: vgaarb: loaded Feb 12 19:38:12.775893 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 19:38:12.775900 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:38:12.775907 kernel: PTP clock support registered Feb 12 19:38:12.775914 kernel: Registered efivars operations Feb 12 19:38:12.775920 kernel: PCI: Using ACPI for IRQ routing Feb 12 19:38:12.775927 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 19:38:12.775934 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 12 19:38:12.775940 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 12 19:38:12.775949 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff] Feb 12 19:38:12.775955 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff] Feb 12 19:38:12.775962 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 12 19:38:12.775969 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 12 19:38:12.775975 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 19:38:12.775982 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 19:38:12.775989 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 19:38:12.775996 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:38:12.776003 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:38:12.776010 kernel: pnp: PnP ACPI init Feb 12 19:38:12.776087 kernel: pnp 00:02: [dma 2] Feb 12 19:38:12.776097 kernel: pnp: PnP ACPI: found 6 devices Feb 12 19:38:12.776104 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 19:38:12.776111 kernel: NET: Registered PF_INET protocol family Feb 12 19:38:12.776118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 19:38:12.776125 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 12 19:38:12.776131 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:38:12.776140 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:38:12.776147 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 12 19:38:12.776154 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 12 19:38:12.776161 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:38:12.776168 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 12 19:38:12.776175 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:38:12.776181 kernel: NET: Registered PF_XDP protocol family Feb 12 19:38:12.776255 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 12 19:38:12.776385 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 12 19:38:12.776451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 19:38:12.776512 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 19:38:12.776572 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 19:38:12.776631 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Feb 12 19:38:12.776690 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Feb 12 19:38:12.776757 kernel: pci 0000:00:01.0: PIIX3: 
Enabling Passive Release Feb 12 19:38:12.776826 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 19:38:12.776905 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 19:38:12.776916 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:38:12.776923 kernel: Initialise system trusted keyrings Feb 12 19:38:12.776930 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 12 19:38:12.776937 kernel: Key type asymmetric registered Feb 12 19:38:12.776944 kernel: Asymmetric key parser 'x509' registered Feb 12 19:38:12.776951 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:38:12.776959 kernel: io scheduler mq-deadline registered Feb 12 19:38:12.776966 kernel: io scheduler kyber registered Feb 12 19:38:12.776974 kernel: io scheduler bfq registered Feb 12 19:38:12.776981 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 19:38:12.776989 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 19:38:12.776996 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 12 19:38:12.777003 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 19:38:12.777010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:38:12.777017 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 19:38:12.777024 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 19:38:12.777031 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 19:38:12.777040 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 19:38:12.777123 kernel: rtc_cmos 00:05: RTC can wake from S4 Feb 12 19:38:12.777137 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 19:38:12.777220 kernel: rtc_cmos 00:05: registered as rtc0 Feb 12 19:38:12.777297 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T19:38:12 UTC (1707766692) Feb 12 19:38:12.777426 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 12 19:38:12.777438 kernel: efifb: probing for efifb Feb 12 19:38:12.777453 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 12 19:38:12.777461 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 12 19:38:12.777468 kernel: efifb: scrolling: redraw Feb 12 19:38:12.777475 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 12 19:38:12.777483 kernel: Console: switching to colour frame buffer device 160x50 Feb 12 19:38:12.777495 kernel: fb0: EFI VGA frame buffer device Feb 12 19:38:12.777508 kernel: pstore: Registered efi as persistent store backend Feb 12 19:38:12.777515 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:38:12.777522 kernel: Segment Routing with IPv6 Feb 12 19:38:12.777529 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:38:12.777536 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:38:12.777551 kernel: Key type dns_resolver registered Feb 12 19:38:12.777558 kernel: IPI shorthand broadcast: enabled Feb 12 19:38:12.777566 kernel: sched_clock: Marking stable (360411029, 93369378)->(474737081, -20956674) Feb 12 19:38:12.777573 kernel: registered taskstats version 1 Feb 12 19:38:12.777581 kernel: Loading compiled-in X.509 certificates Feb 12 19:38:12.777589 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 19:38:12.777596 kernel: Key type .fscrypt registered Feb 12 19:38:12.777602 kernel: Key type fscrypt-provisioning registered Feb 12 19:38:12.777610 kernel: pstore: Using 
crash dump compression: deflate Feb 12 19:38:12.777618 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 19:38:12.777625 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:38:12.777632 kernel: ima: No architecture policies found Feb 12 19:38:12.777639 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:38:12.777647 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:38:12.777654 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:38:12.777662 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:38:12.777670 kernel: Run /init as init process Feb 12 19:38:12.777679 kernel: with arguments: Feb 12 19:38:12.777687 kernel: /init Feb 12 19:38:12.777696 kernel: with environment: Feb 12 19:38:12.777703 kernel: HOME=/ Feb 12 19:38:12.777710 kernel: TERM=linux Feb 12 19:38:12.777717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:38:12.777727 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:38:12.777736 systemd[1]: Detected virtualization kvm. Feb 12 19:38:12.777744 systemd[1]: Detected architecture x86-64. Feb 12 19:38:12.777752 systemd[1]: Running in initrd. Feb 12 19:38:12.777759 systemd[1]: No hostname configured, using default hostname. Feb 12 19:38:12.777766 systemd[1]: Hostname set to . Feb 12 19:38:12.777775 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:38:12.777783 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:38:12.777790 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:38:12.777797 systemd[1]: Reached target cryptsetup.target. Feb 12 19:38:12.777805 systemd[1]: Reached target paths.target. Feb 12 19:38:12.777812 systemd[1]: Reached target slices.target. Feb 12 19:38:12.777820 systemd[1]: Reached target swap.target. Feb 12 19:38:12.777827 systemd[1]: Reached target timers.target. Feb 12 19:38:12.777836 systemd[1]: Listening on iscsid.socket. Feb 12 19:38:12.777843 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:38:12.777851 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:38:12.777858 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:38:12.777874 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:38:12.777882 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:38:12.777889 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:38:12.777897 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:38:12.777905 systemd[1]: Reached target sockets.target. Feb 12 19:38:12.777914 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:38:12.777921 systemd[1]: Finished network-cleanup.service. Feb 12 19:38:12.777929 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:38:12.777936 systemd[1]: Starting systemd-journald.service... Feb 12 19:38:12.777944 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:38:12.777951 systemd[1]: Starting systemd-resolved.service... Feb 12 19:38:12.777959 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:38:12.777966 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:38:12.777974 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 12 19:38:12.777982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:38:12.777990 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:38:12.777997 kernel: audit: type=1130 audit(1707766692.770:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.778005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:38:12.778013 kernel: audit: type=1130 audit(1707766692.774:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.778021 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:38:12.778030 systemd-journald[197]: Journal started Feb 12 19:38:12.778068 systemd-journald[197]: Runtime Journal (/run/log/journal/ba368cefd7674311accd065a77b4b2e9) is 6.0M, max 48.4M, 42.4M free. Feb 12 19:38:12.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.767320 systemd-modules-load[198]: Inserted module 'overlay' Feb 12 19:38:12.780546 systemd[1]: Started systemd-journald.service. Feb 12 19:38:12.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.783353 kernel: audit: type=1130 audit(1707766692.780:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.786148 systemd-resolved[199]: Positive Trust Anchors: Feb 12 19:38:12.786325 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:38:12.786369 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:38:12.788444 systemd-resolved[199]: Defaulting to hostname 'linux'. Feb 12 19:38:12.789093 systemd[1]: Started systemd-resolved.service. Feb 12 19:38:12.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.789301 systemd[1]: Reached target nss-lookup.target. Feb 12 19:38:12.791604 kernel: audit: type=1130 audit(1707766692.788:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:12.797354 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:38:12.799110 systemd-modules-load[198]: Inserted module 'br_netfilter' Feb 12 19:38:12.799359 kernel: Bridge firewalling registered Feb 12 19:38:12.800297 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:38:12.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.801987 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:38:12.804324 kernel: audit: type=1130 audit(1707766692.800:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.809099 dracut-cmdline[215]: dracut-dracut-053 Feb 12 19:38:12.810647 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:38:12.815349 kernel: SCSI subsystem initialized Feb 12 19:38:12.825733 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:38:12.825762 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:38:12.825773 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:38:12.828305 systemd-modules-load[198]: Inserted module 'dm_multipath' Feb 12 19:38:12.828954 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:38:12.831959 kernel: audit: type=1130 audit(1707766692.828:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.829554 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:38:12.836528 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:38:12.839521 kernel: audit: type=1130 audit(1707766692.836:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.864362 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:38:12.875358 kernel: iscsi: registered transport (tcp) Feb 12 19:38:12.893350 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:38:12.893365 kernel: QLogic iSCSI HBA Driver Feb 12 19:38:12.920482 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:38:12.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:38:12.921753 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:38:12.924459 kernel: audit: type=1130 audit(1707766692.920:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:12.965350 kernel: raid6: avx2x4 gen() 30221 MB/s Feb 12 19:38:12.982346 kernel: raid6: avx2x4 xor() 7408 MB/s Feb 12 19:38:12.999345 kernel: raid6: avx2x2 gen() 32400 MB/s Feb 12 19:38:13.016345 kernel: raid6: avx2x2 xor() 19349 MB/s Feb 12 19:38:13.033345 kernel: raid6: avx2x1 gen() 26622 MB/s Feb 12 19:38:13.050345 kernel: raid6: avx2x1 xor() 15387 MB/s Feb 12 19:38:13.067345 kernel: raid6: sse2x4 gen() 14843 MB/s Feb 12 19:38:13.084344 kernel: raid6: sse2x4 xor() 7259 MB/s Feb 12 19:38:13.101344 kernel: raid6: sse2x2 gen() 16246 MB/s Feb 12 19:38:13.118348 kernel: raid6: sse2x2 xor() 8872 MB/s Feb 12 19:38:13.135351 kernel: raid6: sse2x1 gen() 10390 MB/s Feb 12 19:38:13.152789 kernel: raid6: sse2x1 xor() 7378 MB/s Feb 12 19:38:13.152803 kernel: raid6: using algorithm avx2x2 gen() 32400 MB/s Feb 12 19:38:13.152821 kernel: raid6: .... xor() 19349 MB/s, rmw enabled Feb 12 19:38:13.152830 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:38:13.166352 kernel: xor: automatically using best checksumming function avx Feb 12 19:38:13.264358 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:38:13.272108 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:38:13.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:13.272000 audit: BPF prog-id=7 op=LOAD Feb 12 19:38:13.274000 audit: BPF prog-id=8 op=LOAD Feb 12 19:38:13.275343 kernel: audit: type=1130 audit(1707766693.271:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:13.275600 systemd[1]: Starting systemd-udevd.service... Feb 12 19:38:13.287041 systemd-udevd[400]: Using default interface naming scheme 'v252'. Feb 12 19:38:13.290796 systemd[1]: Started systemd-udevd.service. Feb 12 19:38:13.292163 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:38:13.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:13.301075 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 12 19:38:13.324223 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:38:13.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:13.326017 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:38:13.363871 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:38:13.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:13.390590 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 12 19:38:13.394648 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Feb 12 19:38:13.394683 kernel: GPT:9289727 != 19775487 Feb 12 19:38:13.394699 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:38:13.394715 kernel: GPT:9289727 != 19775487 Feb 12 19:38:13.394724 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:38:13.394732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:38:13.397355 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:38:13.407345 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:38:13.407367 kernel: AES CTR mode by8 optimization enabled Feb 12 19:38:13.408352 kernel: libata version 3.00 loaded. Feb 12 19:38:13.411350 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:38:13.412348 kernel: scsi host0: ata_piix Feb 12 19:38:13.415560 kernel: scsi host1: ata_piix Feb 12 19:38:13.415675 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 12 19:38:13.415686 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 12 19:38:13.423346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Feb 12 19:38:13.424365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:38:13.424445 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:38:13.434180 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:38:13.440707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:38:13.445637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:38:13.447507 systemd[1]: Starting disk-uuid.service... Feb 12 19:38:13.453157 disk-uuid[516]: Primary Header is updated. Feb 12 19:38:13.453157 disk-uuid[516]: Secondary Entries is updated. Feb 12 19:38:13.453157 disk-uuid[516]: Secondary Header is updated. Feb 12 19:38:13.455421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:38:13.461342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:38:13.570362 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 12 19:38:13.570400 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 12 19:38:13.600361 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 12 19:38:13.600507 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:38:13.617421 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 12 19:38:14.461834 disk-uuid[517]: The operation has completed successfully. Feb 12 19:38:14.462675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:38:14.483473 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:38:14.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.483567 systemd[1]: Finished disk-uuid.service. Feb 12 19:38:14.490100 systemd[1]: Starting verity-setup.service... Feb 12 19:38:14.501357 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 12 19:38:14.520579 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:38:14.521969 systemd[1]: Finished verity-setup.service. 
Feb 12 19:38:14.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.523762 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:38:14.585251 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:38:14.586372 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:38:14.585477 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:38:14.586285 systemd[1]: Starting ignition-setup.service... Feb 12 19:38:14.587855 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:38:14.594636 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:38:14.594664 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:38:14.594678 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:38:14.601982 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:38:14.609247 systemd[1]: Finished ignition-setup.service. Feb 12 19:38:14.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.610025 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:38:14.648188 ignition[624]: Ignition 2.14.0 Feb 12 19:38:14.648198 ignition[624]: Stage: fetch-offline Feb 12 19:38:14.648263 ignition[624]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:14.648272 ignition[624]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:14.648389 ignition[624]: parsed url from cmdline: "" Feb 12 19:38:14.648392 ignition[624]: no config URL provided Feb 12 19:38:14.648397 ignition[624]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:38:14.648404 ignition[624]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:38:14.648423 ignition[624]: op(1): [started] loading QEMU firmware config module Feb 12 19:38:14.648427 ignition[624]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 12 19:38:14.652416 ignition[624]: op(1): [finished] loading QEMU firmware config module Feb 12 19:38:14.660401 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:38:14.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.662000 audit: BPF prog-id=9 op=LOAD Feb 12 19:38:14.662903 systemd[1]: Starting systemd-networkd.service... Feb 12 19:38:14.667165 ignition[624]: parsing config with SHA512: 797a83f8292ad423a8c82c24ad279889bbcc95979bb72d5fb7656133ac525a4cb3abe1730802841954b45a1c00a9956251340aeb8e8d669b4c32434ea79fea2f Feb 12 19:38:14.684096 systemd-networkd[711]: lo: Link UP Feb 12 19:38:14.684103 systemd-networkd[711]: lo: Gained carrier Feb 12 19:38:14.684471 systemd-networkd[711]: Enumeration completed Feb 12 19:38:14.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.684644 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:38:14.684757 systemd[1]: Started systemd-networkd.service. 
Feb 12 19:38:14.687628 ignition[624]: fetch-offline: fetch-offline passed Feb 12 19:38:14.685356 systemd-networkd[711]: eth0: Link UP Feb 12 19:38:14.687680 ignition[624]: Ignition finished successfully Feb 12 19:38:14.685359 systemd-networkd[711]: eth0: Gained carrier Feb 12 19:38:14.686642 systemd[1]: Reached target network.target. Feb 12 19:38:14.687142 unknown[624]: fetched base config from "system" Feb 12 19:38:14.687148 unknown[624]: fetched user config from "qemu" Feb 12 19:38:14.693319 systemd[1]: Starting iscsiuio.service... Feb 12 19:38:14.694510 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:38:14.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.695793 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 12 19:38:14.697420 systemd[1]: Starting ignition-kargs.service... Feb 12 19:38:14.698538 systemd[1]: Started iscsiuio.service. Feb 12 19:38:14.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.700125 systemd[1]: Starting iscsid.service... Feb 12 19:38:14.700684 systemd-networkd[711]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:38:14.702803 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:38:14.702803 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 19:38:14.702803 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:38:14.702803 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:38:14.702803 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:38:14.702803 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:38:14.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.705773 ignition[716]: Ignition 2.14.0 Feb 12 19:38:14.703876 systemd[1]: Started iscsid.service. Feb 12 19:38:14.705779 ignition[716]: Stage: kargs Feb 12 19:38:14.704667 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:38:14.705867 ignition[716]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:14.708382 systemd[1]: Finished ignition-kargs.service. Feb 12 19:38:14.705875 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:14.710063 systemd[1]: Starting ignition-disks.service... 
Feb 12 19:38:14.706710 ignition[716]: kargs: kargs passed Feb 12 19:38:14.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.714532 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:38:14.706740 ignition[716]: Ignition finished successfully Feb 12 19:38:14.718122 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:38:14.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.717086 ignition[727]: Ignition 2.14.0 Feb 12 19:38:14.718924 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:38:14.717091 ignition[727]: Stage: disks Feb 12 19:38:14.719598 systemd[1]: Reached target remote-fs.target. Feb 12 19:38:14.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.717182 ignition[727]: no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:14.721508 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:38:14.717190 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:14.722293 systemd[1]: Finished ignition-disks.service. Feb 12 19:38:14.718079 ignition[727]: disks: disks passed Feb 12 19:38:14.723525 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:38:14.718113 ignition[727]: Ignition finished successfully Feb 12 19:38:14.724270 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:38:14.724308 systemd[1]: Reached target local-fs.target. Feb 12 19:38:14.724606 systemd[1]: Reached target sysinit.target. Feb 12 19:38:14.724741 systemd[1]: Reached target basic.target. Feb 12 19:38:14.728323 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:38:14.729640 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:38:14.742530 systemd-fsck[744]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 19:38:14.748284 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:38:14.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.749983 systemd[1]: Mounting sysroot.mount... Feb 12 19:38:14.759295 systemd[1]: Mounted sysroot.mount. Feb 12 19:38:14.761004 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:38:14.760012 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:38:14.762030 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:38:14.763385 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 19:38:14.763440 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:38:14.763468 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:38:14.765937 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:38:14.767271 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:38:14.770759 initrd-setup-root[754]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:38:14.773652 initrd-setup-root[762]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:38:14.776152 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:38:14.778703 initrd-setup-root[778]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:38:14.799535 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:38:14.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.801309 systemd[1]: Starting ignition-mount.service... Feb 12 19:38:14.802228 systemd[1]: Starting sysroot-boot.service... Feb 12 19:38:14.805932 bash[795]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:38:14.813039 ignition[796]: INFO : Ignition 2.14.0 Feb 12 19:38:14.813039 ignition[796]: INFO : Stage: mount Feb 12 19:38:14.814157 ignition[796]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:14.814157 ignition[796]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:14.816430 ignition[796]: INFO : mount: mount passed Feb 12 19:38:14.816979 ignition[796]: INFO : Ignition finished successfully Feb 12 19:38:14.818066 systemd[1]: Finished ignition-mount.service. Feb 12 19:38:14.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:14.818438 systemd[1]: Finished sysroot-boot.service. Feb 12 19:38:14.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:15.530926 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:38:15.536494 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Feb 12 19:38:15.536524 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:38:15.536533 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:38:15.537565 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:38:15.540246 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:38:15.541323 systemd[1]: Starting ignition-files.service... 
Feb 12 19:38:15.554133 ignition[826]: INFO : Ignition 2.14.0 Feb 12 19:38:15.554133 ignition[826]: INFO : Stage: files Feb 12 19:38:15.555290 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:15.555290 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:15.556681 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:38:15.557913 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:38:15.557913 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:38:15.560643 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:38:15.561653 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:38:15.562890 unknown[826]: wrote ssh authorized keys file for user: core Feb 12 19:38:15.563603 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:38:15.564611 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:38:15.564611 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:38:15.564611 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:38:15.564611 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 19:38:15.908117 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:38:16.020083 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 19:38:16.022406 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:38:16.022406 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:38:16.022406 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:38:16.184561 systemd-networkd[711]: eth0: Gained IPv6LL Feb 12 19:38:16.326322 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:38:16.397008 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 19:38:16.399112 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:38:16.400408 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:38:16.401564 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET 
https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:38:16.627208 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:38:16.841972 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 19:38:16.843991 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:38:16.843991 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:38:16.843991 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:38:16.888780 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:38:17.337554 ignition[826]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 19:38:17.339813 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:38:17.339813 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:38:17.342344 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:38:17.342344 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:38:17.345092 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:38:17.346354 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:38:17.347536 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:38:17.348775 ignition[826]: INFO : files: op(b): [started] processing unit "coreos-metadata.service" Feb 12 19:38:17.349941 ignition[826]: INFO : files: op(b): op(c): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:38:17.351707 ignition[826]: INFO : files: op(b): op(c): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 19:38:17.351707 ignition[826]: INFO : files: op(b): [finished] processing unit "coreos-metadata.service" Feb 12 19:38:17.354188 ignition[826]: INFO : files: op(d): [started] processing unit "containerd.service" Feb 12 19:38:17.354188 ignition[826]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:38:17.357112 ignition[826]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:38:17.357112 ignition[826]: INFO : files: op(d): [finished] processing unit "containerd.service" Feb 12 19:38:17.357112 ignition[826]: INFO 
: files: op(f): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:38:17.357112 ignition[826]: INFO : files: op(f): op(10): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:38:17.362959 ignition[826]: INFO : files: op(f): op(10): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:38:17.362959 ignition[826]: INFO : files: op(f): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:38:17.362959 ignition[826]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Feb 12 19:38:17.362959 ignition[826]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:38:17.368621 ignition[826]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:38:17.368621 ignition[826]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Feb 12 19:38:17.368621 ignition[826]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 19:38:17.368621 ignition[826]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:38:17.386450 ignition[826]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 19:38:17.387988 ignition[826]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 19:38:17.387988 ignition[826]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:38:17.387988 ignition[826]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:38:17.391484 ignition[826]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:38:17.391484 ignition[826]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:38:17.393489 ignition[826]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:38:17.394674 ignition[826]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:38:17.395825 ignition[826]: INFO : files: files passed Feb 12 19:38:17.395825 ignition[826]: INFO : Ignition finished successfully Feb 12 19:38:17.397517 systemd[1]: Finished ignition-files.service. Feb 12 19:38:17.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.399601 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 19:38:17.399628 kernel: audit: type=1130 audit(1707766697.398:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.399809 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:38:17.403107 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
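In the files stage above, Ignition downloads cni-plugins, crictl, kubeadm and kubelet and logs "file matches expected sum of: ..." after comparing each download against a pinned SHA-512 digest. A short illustrative sketch of the same kind of verification (not part of the log; the path and digest in the usage comment are placeholders, not values from this boot):

    # Sketch only: verify a downloaded artifact against a pinned SHA-512 digest,
    # mirroring Ignition's "file matches expected sum of: ..." check above.
    import hashlib

    def sha512_hex(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def matches_expected(path: str, expected_hex: str) -> bool:
        return sha512_hex(path) == expected_hex.lower()

    # usage (placeholder values):
    # assert matches_expected("/opt/bin/kubeadm", "1c324c...")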
Feb 12 19:38:17.404794 initrd-setup-root-after-ignition[851]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 19:38:17.406120 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:38:17.407641 systemd[1]: Starting ignition-quench.service... Feb 12 19:38:17.409126 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:38:17.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.410799 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:38:17.413557 kernel: audit: type=1130 audit(1707766697.410:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.410878 systemd[1]: Finished ignition-quench.service. Feb 12 19:38:17.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.414893 systemd[1]: Reached target ignition-complete.target. Feb 12 19:38:17.420398 kernel: audit: type=1130 audit(1707766697.414:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.420416 kernel: audit: type=1131 audit(1707766697.414:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.420903 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:38:17.430757 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:38:17.431511 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:38:17.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.432847 systemd[1]: Reached target initrd-fs.target. Feb 12 19:38:17.437737 kernel: audit: type=1130 audit(1707766697.432:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.437767 kernel: audit: type=1131 audit(1707766697.432:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.437736 systemd[1]: Reached target initrd.target. Feb 12 19:38:17.438922 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:38:17.440518 systemd[1]: Starting dracut-pre-pivot.service... 
Feb 12 19:38:17.447984 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:38:17.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.449865 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:38:17.452517 kernel: audit: type=1130 audit(1707766697.448:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.458070 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:38:17.459227 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:38:17.460496 systemd[1]: Stopped target timers.target. Feb 12 19:38:17.461568 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:38:17.462257 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:38:17.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.463470 systemd[1]: Stopped target initrd.target. Feb 12 19:38:17.466304 kernel: audit: type=1131 audit(1707766697.463:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.466388 systemd[1]: Stopped target basic.target. Feb 12 19:38:17.467460 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:38:17.468699 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:38:17.469959 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:38:17.471216 systemd[1]: Stopped target remote-fs.target. Feb 12 19:38:17.472351 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:38:17.473676 systemd[1]: Stopped target sysinit.target. Feb 12 19:38:17.474782 systemd[1]: Stopped target local-fs.target. Feb 12 19:38:17.475879 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:38:17.477037 systemd[1]: Stopped target swap.target. Feb 12 19:38:17.478043 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:38:17.478765 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:38:17.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.479936 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:38:17.482874 kernel: audit: type=1131 audit(1707766697.479:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.482909 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:38:17.483612 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:38:17.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.484783 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:38:17.488019 kernel: audit: type=1131 audit(1707766697.484:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:38:17.484884 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:38:17.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.488143 systemd[1]: Stopped target paths.target. Feb 12 19:38:17.489159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:38:17.490386 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:38:17.491213 systemd[1]: Stopped target slices.target. Feb 12 19:38:17.492544 systemd[1]: Stopped target sockets.target. Feb 12 19:38:17.493832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:38:17.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.493959 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:38:17.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.495374 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:38:17.499815 iscsid[717]: iscsid shutting down. Feb 12 19:38:17.495485 systemd[1]: Stopped ignition-files.service. Feb 12 19:38:17.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.497760 systemd[1]: Stopping ignition-mount.service... Feb 12 19:38:17.505834 ignition[868]: INFO : Ignition 2.14.0 Feb 12 19:38:17.505834 ignition[868]: INFO : Stage: umount Feb 12 19:38:17.505834 ignition[868]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:38:17.505834 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:38:17.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:17.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.498582 systemd[1]: Stopping iscsid.service... Feb 12 19:38:17.514833 ignition[868]: INFO : umount: umount passed Feb 12 19:38:17.514833 ignition[868]: INFO : Ignition finished successfully Feb 12 19:38:17.499739 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:38:17.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.499869 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:38:17.501379 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:38:17.501444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:38:17.501570 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:38:17.501858 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:38:17.501955 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:38:17.503811 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:38:17.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.503928 systemd[1]: Stopped iscsid.service. Feb 12 19:38:17.506057 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:38:17.506158 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:38:17.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.508209 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:38:17.508274 systemd[1]: Stopped ignition-mount.service. Feb 12 19:38:17.510273 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:38:17.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.510297 systemd[1]: Closed iscsid.socket. Feb 12 19:38:17.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:17.511291 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:38:17.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.511322 systemd[1]: Stopped ignition-disks.service. Feb 12 19:38:17.512112 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:38:17.512141 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:38:17.512891 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:38:17.539000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:38:17.512918 systemd[1]: Stopped ignition-setup.service. Feb 12 19:38:17.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.513654 systemd[1]: Stopping iscsiuio.service... Feb 12 19:38:17.516512 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:38:17.516582 systemd[1]: Stopped iscsiuio.service. Feb 12 19:38:17.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.518554 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:38:17.518791 systemd[1]: Stopped target network.target. Feb 12 19:38:17.519923 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:38:17.519947 systemd[1]: Closed iscsiuio.socket. Feb 12 19:38:17.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.521470 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:38:17.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.522499 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:38:17.523728 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:38:17.523796 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:38:17.524731 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:38:17.524769 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:38:17.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:17.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:17.525392 systemd-networkd[711]: eth0: DHCPv6 lease lost Feb 12 19:38:17.556000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:38:17.526707 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:38:17.526784 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:38:17.528582 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:38:17.528616 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:38:17.530604 systemd[1]: Stopping network-cleanup.service... Feb 12 19:38:17.531248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:38:17.531295 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:38:17.532735 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:38:17.532774 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:38:17.533351 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:38:17.565000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:38:17.565000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:38:17.565000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:38:17.533381 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:38:17.534357 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:38:17.566000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:38:17.566000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:38:17.536327 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:38:17.536790 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:38:17.536861 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:38:17.541220 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:38:17.541280 systemd[1]: Stopped network-cleanup.service. Feb 12 19:38:17.542996 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:38:17.543081 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:38:17.544818 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:38:17.544847 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:38:17.546013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:38:17.546034 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:38:17.547062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:38:17.547093 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:38:17.548197 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:38:17.548225 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:38:17.549194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:38:17.549224 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:38:17.549776 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:38:17.549823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:38:17.549854 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:38:17.555441 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:38:17.555499 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:38:17.556539 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:38:17.583740 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Feb 12 19:38:17.557629 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:38:17.563208 systemd[1]: Switching root. Feb 12 19:38:17.584810 systemd-journald[197]: Journal stopped Feb 12 19:38:20.401888 kernel: SELinux: Class mctp_socket not defined in policy. 
Feb 12 19:38:20.401926 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:38:20.401938 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:38:20.401952 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:38:20.401961 kernel: SELinux: policy capability open_perms=1 Feb 12 19:38:20.401971 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:38:20.401982 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:38:20.401991 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:38:20.402002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:38:20.402011 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:38:20.402025 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:38:20.402035 systemd[1]: Successfully loaded SELinux policy in 36.145ms. Feb 12 19:38:20.402051 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.147ms. Feb 12 19:38:20.402062 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:38:20.402072 systemd[1]: Detected virtualization kvm. Feb 12 19:38:20.402082 systemd[1]: Detected architecture x86-64. Feb 12 19:38:20.402093 systemd[1]: Detected first boot. Feb 12 19:38:20.402103 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:38:20.402113 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:38:20.402122 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:38:20.402132 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:38:20.402145 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:38:20.402156 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:38:20.402167 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:38:20.402179 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:38:20.402189 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:38:20.402199 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:38:20.402209 systemd[1]: Created slice system-getty.slice. Feb 12 19:38:20.402221 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:38:20.402231 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:38:20.402242 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:38:20.402251 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:38:20.402261 systemd[1]: Created slice user.slice. Feb 12 19:38:20.402272 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:38:20.402283 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:38:20.402293 systemd[1]: Set up automount boot.automount. Feb 12 19:38:20.402303 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Feb 12 19:38:20.402313 systemd[1]: Reached target integritysetup.target. Feb 12 19:38:20.402323 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:38:20.402342 systemd[1]: Reached target remote-fs.target. Feb 12 19:38:20.402352 systemd[1]: Reached target slices.target. Feb 12 19:38:20.402364 systemd[1]: Reached target swap.target. Feb 12 19:38:20.402374 systemd[1]: Reached target torcx.target. Feb 12 19:38:20.402383 systemd[1]: Reached target veritysetup.target. Feb 12 19:38:20.402393 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:38:20.402404 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:38:20.402413 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:38:20.402423 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:38:20.402433 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:38:20.402442 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:38:20.402452 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:38:20.402463 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:38:20.402473 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:38:20.402483 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:38:20.402493 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:38:20.402502 systemd[1]: Mounting media.mount... Feb 12 19:38:20.402512 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:38:20.402522 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:38:20.402532 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:38:20.402541 systemd[1]: Mounting tmp.mount... Feb 12 19:38:20.402553 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:38:20.402563 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:38:20.402572 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:38:20.402582 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:38:20.402592 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:38:20.402602 systemd[1]: Starting modprobe@drm.service... Feb 12 19:38:20.402618 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:38:20.402628 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:38:20.402638 systemd[1]: Starting modprobe@loop.service... Feb 12 19:38:20.402649 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:38:20.402660 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:38:20.402669 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:38:20.402686 systemd[1]: Starting systemd-journald.service... Feb 12 19:38:20.402696 kernel: loop: module loaded Feb 12 19:38:20.402706 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:38:20.402716 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:38:20.402726 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:38:20.402735 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:38:20.402747 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:38:20.402756 kernel: fuse: init (API version 7.34) Feb 12 19:38:20.402766 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:38:20.402775 systemd[1]: Mounted dev-mqueue.mount. 
Feb 12 19:38:20.402785 systemd[1]: Mounted media.mount. Feb 12 19:38:20.402795 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:38:20.402805 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:38:20.402817 systemd[1]: Mounted tmp.mount. Feb 12 19:38:20.402832 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:38:20.402844 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:38:20.402854 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:38:20.402864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:38:20.402874 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:38:20.402885 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:38:20.402894 systemd[1]: Finished modprobe@drm.service. Feb 12 19:38:20.402904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:38:20.402917 systemd-journald[1009]: Journal started Feb 12 19:38:20.402953 systemd-journald[1009]: Runtime Journal (/run/log/journal/ba368cefd7674311accd065a77b4b2e9) is 6.0M, max 48.4M, 42.4M free. Feb 12 19:38:20.330000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:38:20.330000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:38:20.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.400000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:38:20.400000 audit[1009]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffec91063a0 a2=4000 a3=7ffec910643c items=0 ppid=1 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:20.400000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:38:20.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:20.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.405144 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:38:20.405170 systemd[1]: Started systemd-journald.service. Feb 12 19:38:20.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.406968 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:38:20.407115 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:38:20.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.407993 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:38:20.413420 systemd[1]: Finished modprobe@loop.service. Feb 12 19:38:20.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.414358 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:38:20.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.415193 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:38:20.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.416066 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:38:20.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.416976 systemd[1]: Finished systemd-remount-fs.service. 
Feb 12 19:38:20.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.417844 systemd[1]: Reached target network-pre.target. Feb 12 19:38:20.419283 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:38:20.421052 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:38:20.421618 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:38:20.422635 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:38:20.423970 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:38:20.424654 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:38:20.425372 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:38:20.426013 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:38:20.426754 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:38:20.428103 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:38:20.430887 systemd-journald[1009]: Time spent on flushing to /var/log/journal/ba368cefd7674311accd065a77b4b2e9 is 24.521ms for 1108 entries. Feb 12 19:38:20.430887 systemd-journald[1009]: System Journal (/var/log/journal/ba368cefd7674311accd065a77b4b2e9) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:38:20.467079 systemd-journald[1009]: Received client request to flush runtime journal. Feb 12 19:38:20.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.430788 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:38:20.432732 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:38:20.439414 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:38:20.468230 udevadm[1057]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:38:20.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.440250 systemd[1]: Reached target first-boot-complete.target. 
Feb 12 19:38:20.443241 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:38:20.444129 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:38:20.445685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:38:20.449508 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:38:20.450995 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:38:20.461387 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:38:20.467762 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:38:20.895119 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:38:20.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.896722 systemd[1]: Starting systemd-udevd.service... Feb 12 19:38:20.915948 systemd-udevd[1062]: Using default interface naming scheme 'v252'. Feb 12 19:38:20.926911 systemd[1]: Started systemd-udevd.service. Feb 12 19:38:20.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.929116 systemd[1]: Starting systemd-networkd.service... Feb 12 19:38:20.933810 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:38:20.958045 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:38:20.969545 systemd[1]: Started systemd-userdbd.service. Feb 12 19:38:20.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:20.995967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:38:21.005370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:38:21.009590 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:38:21.008603 systemd-networkd[1069]: lo: Link UP Feb 12 19:38:21.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.008610 systemd-networkd[1069]: lo: Gained carrier Feb 12 19:38:21.008954 systemd-networkd[1069]: Enumeration completed Feb 12 19:38:21.009054 systemd[1]: Started systemd-networkd.service. Feb 12 19:38:21.009560 systemd-networkd[1069]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 19:38:21.010648 systemd-networkd[1069]: eth0: Link UP Feb 12 19:38:21.010653 systemd-networkd[1069]: eth0: Gained carrier Feb 12 19:38:21.010000 audit[1074]: AVC avc: denied { confidentiality } for pid=1074 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:38:21.010000 audit[1074]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561f0c493ff0 a1=32194 a2=7f4a7bac5bc5 a3=5 items=108 ppid=1062 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:21.010000 audit: CWD cwd="/" Feb 12 19:38:21.010000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=1 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=2 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=3 name=(null) inode=14844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=4 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=5 name=(null) inode=14845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=6 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=7 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=8 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=9 name=(null) inode=14847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=10 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=11 name=(null) inode=14848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=12 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=13 name=(null) inode=14849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=14 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=15 name=(null) inode=14850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=16 name=(null) inode=14846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=17 name=(null) inode=14851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=18 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=19 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=20 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=21 name=(null) inode=14853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=22 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=23 name=(null) inode=14854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=24 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=25 name=(null) inode=14855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=26 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=27 name=(null) inode=14856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=28 name=(null) inode=14852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH 
item=29 name=(null) inode=14857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=30 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=31 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=32 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=33 name=(null) inode=14859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=34 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=35 name=(null) inode=14860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=36 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=37 name=(null) inode=14861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=38 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=39 name=(null) inode=14862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=40 name=(null) inode=14858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=41 name=(null) inode=14863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=42 name=(null) inode=14843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=43 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=44 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=45 name=(null) inode=14865 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=46 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=47 name=(null) inode=14866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=48 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=49 name=(null) inode=14867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=50 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=51 name=(null) inode=14868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=52 name=(null) inode=14864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=53 name=(null) inode=14869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=55 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=56 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=57 name=(null) inode=14871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=58 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=59 name=(null) inode=14872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=60 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=61 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=62 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=63 name=(null) inode=14874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=64 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=65 name=(null) inode=14875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=66 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=67 name=(null) inode=14876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=68 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=69 name=(null) inode=14877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=70 name=(null) inode=14873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=71 name=(null) inode=14878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=72 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=73 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=74 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=75 name=(null) inode=14880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=76 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=77 name=(null) inode=14881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH 
item=78 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=79 name=(null) inode=14882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=80 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=81 name=(null) inode=14883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=82 name=(null) inode=14879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=83 name=(null) inode=14884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=84 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=85 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=86 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=87 name=(null) inode=14886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=88 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=89 name=(null) inode=14887 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=90 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=91 name=(null) inode=14888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=92 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=93 name=(null) inode=14889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=94 name=(null) inode=14885 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=95 name=(null) inode=14890 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=96 name=(null) inode=14870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=97 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=98 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=99 name=(null) inode=14892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=100 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=101 name=(null) inode=14893 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=102 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=103 name=(null) inode=14894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=104 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=105 name=(null) inode=14895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=106 name=(null) inode=14891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PATH item=107 name=(null) inode=14896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:38:21.010000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:38:21.017414 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 12 19:38:21.024473 systemd-networkd[1069]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 19:38:21.032348 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:38:21.047354 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:38:21.093637 kernel: kvm: Nested Virtualization enabled Feb 12 19:38:21.093799 kernel: SVM: kvm: Nested Paging 
enabled Feb 12 19:38:21.093835 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 19:38:21.093898 kernel: SVM: Virtual GIF supported Feb 12 19:38:21.105349 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:38:21.125639 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:38:21.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.127226 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:38:21.133434 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:38:21.157950 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:38:21.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.158685 systemd[1]: Reached target cryptsetup.target. Feb 12 19:38:21.160140 systemd[1]: Starting lvm2-activation.service... Feb 12 19:38:21.163136 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:38:21.184950 systemd[1]: Finished lvm2-activation.service. Feb 12 19:38:21.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.185638 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:38:21.186247 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:38:21.186265 systemd[1]: Reached target local-fs.target. Feb 12 19:38:21.186844 systemd[1]: Reached target machines.target. Feb 12 19:38:21.188276 systemd[1]: Starting ldconfig.service... Feb 12 19:38:21.189005 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:38:21.189043 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:38:21.189753 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:38:21.191143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:38:21.192851 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:38:21.193704 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:38:21.193737 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:38:21.194515 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:38:21.197036 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) Feb 12 19:38:21.197797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:38:21.198944 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:38:21.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:21.204606 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:38:21.205580 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:38:21.207438 systemd-tmpfiles[1107]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:38:21.232185 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Feb 12 19:38:21.232185 systemd-fsck[1113]: /dev/vda1: 790 files, 115362/258078 clusters Feb 12 19:38:21.233451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:38:21.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.236362 systemd[1]: Mounting boot.mount... Feb 12 19:38:21.253239 systemd[1]: Mounted boot.mount. Feb 12 19:38:21.546709 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:38:21.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.570935 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:38:21.571559 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:38:21.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.583544 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:38:21.589045 systemd[1]: Finished ldconfig.service. Feb 12 19:38:21.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.598689 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:38:21.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.600634 systemd[1]: Starting audit-rules.service... Feb 12 19:38:21.602453 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:38:21.604177 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:38:21.606321 systemd[1]: Starting systemd-resolved.service... Feb 12 19:38:21.608471 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:38:21.610179 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:38:21.611616 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:38:21.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.612619 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 12 19:38:21.614000 audit[1132]: SYSTEM_BOOT pid=1132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.617086 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:38:21.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.622200 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:38:21.624041 systemd[1]: Starting systemd-update-done.service... Feb 12 19:38:21.631224 systemd[1]: Finished systemd-update-done.service. Feb 12 19:38:21.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:21.632000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:38:21.632000 audit[1148]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff20934e70 a2=420 a3=0 items=0 ppid=1122 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:21.632000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:38:21.632926 augenrules[1148]: No rules Feb 12 19:38:21.633383 systemd[1]: Finished audit-rules.service. Feb 12 19:38:21.672616 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:38:21.673512 systemd[1]: Reached target time-set.target. Feb 12 19:38:21.673676 systemd-timesyncd[1131]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 19:38:21.673727 systemd-timesyncd[1131]: Initial clock synchronization to Mon 2024-02-12 19:38:21.366121 UTC. Feb 12 19:38:21.676281 systemd-resolved[1128]: Positive Trust Anchors: Feb 12 19:38:21.676291 systemd-resolved[1128]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:38:21.676318 systemd-resolved[1128]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:38:21.681957 systemd-resolved[1128]: Defaulting to hostname 'linux'. Feb 12 19:38:21.683177 systemd[1]: Started systemd-resolved.service. Feb 12 19:38:21.683818 systemd[1]: Reached target network.target. Feb 12 19:38:21.684322 systemd[1]: Reached target nss-lookup.target. Feb 12 19:38:21.684875 systemd[1]: Reached target sysinit.target. Feb 12 19:38:21.685443 systemd[1]: Started motdgen.path. 
Feb 12 19:38:21.685921 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:38:21.686746 systemd[1]: Started logrotate.timer. Feb 12 19:38:21.687297 systemd[1]: Started mdadm.timer. Feb 12 19:38:21.687743 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:38:21.688293 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:38:21.688317 systemd[1]: Reached target paths.target. Feb 12 19:38:21.688906 systemd[1]: Reached target timers.target. Feb 12 19:38:21.689606 systemd[1]: Listening on dbus.socket. Feb 12 19:38:21.691119 systemd[1]: Starting docker.socket... Feb 12 19:38:21.692312 systemd[1]: Listening on sshd.socket. Feb 12 19:38:21.692894 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:38:21.693117 systemd[1]: Listening on docker.socket. Feb 12 19:38:21.693664 systemd[1]: Reached target sockets.target. Feb 12 19:38:21.694231 systemd[1]: Reached target basic.target. Feb 12 19:38:21.694847 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:38:21.694882 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:38:21.694897 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:38:21.695788 systemd[1]: Starting containerd.service... Feb 12 19:38:21.697156 systemd[1]: Starting dbus.service... Feb 12 19:38:21.698344 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:38:21.700068 systemd[1]: Starting extend-filesystems.service... Feb 12 19:38:21.700975 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:38:21.701994 systemd[1]: Starting motdgen.service... Feb 12 19:38:21.703637 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:38:21.705035 jq[1160]: false Feb 12 19:38:21.705613 systemd[1]: Starting prepare-critools.service... Feb 12 19:38:21.707531 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:38:21.709396 systemd[1]: Starting sshd-keygen.service... Feb 12 19:38:21.712190 systemd[1]: Starting systemd-logind.service... Feb 12 19:38:21.712875 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:38:21.712930 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:38:21.714014 systemd[1]: Starting update-engine.service... Feb 12 19:38:21.715780 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 12 19:38:21.720703 extend-filesystems[1161]: Found sr0 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda1 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda2 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda3 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found usr Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda4 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda6 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda7 Feb 12 19:38:21.720703 extend-filesystems[1161]: Found vda9 Feb 12 19:38:21.720703 extend-filesystems[1161]: Checking size of /dev/vda9 Feb 12 19:38:21.718066 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:38:21.734961 dbus-daemon[1159]: [system] SELinux support is enabled Feb 12 19:38:21.756586 jq[1179]: true Feb 12 19:38:21.718313 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:38:21.727284 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:38:21.756985 tar[1183]: ./ Feb 12 19:38:21.756985 tar[1183]: ./macvlan Feb 12 19:38:21.727611 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:38:21.757315 tar[1185]: crictl Feb 12 19:38:21.735412 systemd[1]: Started dbus.service. Feb 12 19:38:21.757629 jq[1186]: true Feb 12 19:38:21.739450 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:38:21.739731 systemd[1]: Finished motdgen.service. Feb 12 19:38:21.740824 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:38:21.740848 systemd[1]: Reached target system-config.target. Feb 12 19:38:21.741652 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:38:21.741668 systemd[1]: Reached target user-config.target. Feb 12 19:38:21.762181 extend-filesystems[1161]: Resized partition /dev/vda9 Feb 12 19:38:21.766464 extend-filesystems[1220]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:38:21.768442 env[1188]: time="2024-02-12T19:38:21.768407151Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:38:21.775358 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 19:38:21.782829 update_engine[1175]: I0212 19:38:21.782683 1175 main.cc:92] Flatcar Update Engine starting Feb 12 19:38:21.786494 tar[1183]: ./static Feb 12 19:38:21.787847 systemd[1]: Started update-engine.service. Feb 12 19:38:21.788028 update_engine[1175]: I0212 19:38:21.788008 1175 update_check_scheduler.cc:74] Next update check in 8m3s Feb 12 19:38:21.791678 systemd[1]: Started locksmithd.service. Feb 12 19:38:21.808180 systemd-logind[1174]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:38:21.825971 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 19:38:21.826012 env[1188]: time="2024-02-12T19:38:21.816631672Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:38:21.826012 env[1188]: time="2024-02-12T19:38:21.825822114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:38:21.808199 systemd-logind[1174]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:38:21.809516 systemd-logind[1174]: New seat seat0. Feb 12 19:38:21.811205 systemd[1]: Started systemd-logind.service. Feb 12 19:38:21.826477 extend-filesystems[1220]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:38:21.826477 extend-filesystems[1220]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 19:38:21.826477 extend-filesystems[1220]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 19:38:21.831346 extend-filesystems[1161]: Resized filesystem in /dev/vda9 Feb 12 19:38:21.833315 bash[1227]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827225786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827255832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827530587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827551516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827567546Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827580290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827671692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.827919306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.828096248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:38:21.833443 env[1188]: time="2024-02-12T19:38:21.828115414Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:38:21.828959 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.828168663Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.828182870Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832808673Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832835133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832852255Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832899814Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832919712Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832980405Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.832998850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.833015882Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.833035619Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.833051980Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.833068501Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.833833 env[1188]: time="2024-02-12T19:38:21.833084691Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:38:21.829165 systemd[1]: Finished extend-filesystems.service. Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833174489Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833263737Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833736894Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833767872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833783191Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833826552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833843494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833858752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833873871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833888638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833903466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833917873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833931679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.833947128Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:38:21.834257 env[1188]: time="2024-02-12T19:38:21.834073535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.831502 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834093653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834108591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834123549Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834140731Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834153565Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834174855Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:38:21.841211 env[1188]: time="2024-02-12T19:38:21.834212786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:38:21.835375 systemd[1]: Started containerd.service. 
Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.834452275Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.834518269Z" level=info msg="Connect containerd service" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.834546943Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835015722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835240042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835276240Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835318319Z" level=info msg="containerd successfully booted in 0.069440s" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835820130Z" level=info msg="Start subscribing containerd event" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835859043Z" level=info msg="Start recovering state" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835903416Z" level=info msg="Start event monitor" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835917693Z" level=info msg="Start snapshots syncer" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835924616Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:38:21.841467 env[1188]: time="2024-02-12T19:38:21.835931579Z" level=info msg="Start streaming server" Feb 12 19:38:21.851491 locksmithd[1228]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:38:21.855184 tar[1183]: ./vlan Feb 12 19:38:21.891713 tar[1183]: ./portmap Feb 12 19:38:21.925763 tar[1183]: ./host-local Feb 12 19:38:21.956279 tar[1183]: ./vrf Feb 12 19:38:21.989526 tar[1183]: ./bridge Feb 12 19:38:22.028237 tar[1183]: ./tuning Feb 12 19:38:22.057708 tar[1183]: ./firewall Feb 12 19:38:22.089706 tar[1183]: ./host-device Feb 12 19:38:22.117594 tar[1183]: ./sbr Feb 12 19:38:22.142953 tar[1183]: ./loopback Feb 12 19:38:22.155900 systemd[1]: Finished prepare-critools.service. Feb 12 19:38:22.167007 tar[1183]: ./dhcp Feb 12 19:38:22.229331 tar[1183]: ./ptp Feb 12 19:38:22.255943 tar[1183]: ./ipvlan Feb 12 19:38:22.281617 tar[1183]: ./bandwidth Feb 12 19:38:22.313561 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:38:22.759182 sshd_keygen[1180]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:38:22.776451 systemd-networkd[1069]: eth0: Gained IPv6LL Feb 12 19:38:22.778472 systemd[1]: Finished sshd-keygen.service. Feb 12 19:38:22.780669 systemd[1]: Starting issuegen.service... Feb 12 19:38:22.784802 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:38:22.785039 systemd[1]: Finished issuegen.service. Feb 12 19:38:22.787250 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:38:22.791265 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:38:22.793130 systemd[1]: Started getty@tty1.service. Feb 12 19:38:22.794597 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:38:22.795377 systemd[1]: Reached target getty.target. Feb 12 19:38:22.796023 systemd[1]: Reached target multi-user.target. Feb 12 19:38:22.797529 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:38:22.804688 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:38:22.804860 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:38:22.805651 systemd[1]: Startup finished in 5.463s (kernel) + 5.189s (userspace) = 10.652s. Feb 12 19:38:30.976225 systemd[1]: Created slice system-sshd.slice. Feb 12 19:38:30.977160 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:48700.service. Feb 12 19:38:31.016977 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.018111 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.025254 systemd-logind[1174]: New session 1 of user core. Feb 12 19:38:31.026132 systemd[1]: Created slice user-500.slice. 
Feb 12 19:38:31.026884 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:38:31.033395 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:38:31.034460 systemd[1]: Starting user@500.service... Feb 12 19:38:31.036879 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.403379 systemd[1272]: Queued start job for default target default.target. Feb 12 19:38:31.403544 systemd[1272]: Reached target paths.target. Feb 12 19:38:31.403570 systemd[1272]: Reached target sockets.target. Feb 12 19:38:31.403586 systemd[1272]: Reached target timers.target. Feb 12 19:38:31.403599 systemd[1272]: Reached target basic.target. Feb 12 19:38:31.403639 systemd[1272]: Reached target default.target. Feb 12 19:38:31.403664 systemd[1272]: Startup finished in 362ms. Feb 12 19:38:31.403714 systemd[1]: Started user@500.service. Feb 12 19:38:31.404458 systemd[1]: Started session-1.scope. Feb 12 19:38:31.453007 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:48714.service. Feb 12 19:38:31.491631 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 48714 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.492631 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.496380 systemd-logind[1174]: New session 2 of user core. Feb 12 19:38:31.496958 systemd[1]: Started session-2.scope. Feb 12 19:38:31.550283 sshd[1282]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:31.552433 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:48716.service. Feb 12 19:38:31.552798 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:48714.service: Deactivated successfully. Feb 12 19:38:31.553675 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:38:31.554112 systemd-logind[1174]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:38:31.554954 systemd-logind[1174]: Removed session 2. Feb 12 19:38:31.589612 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 48716 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.590650 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.594009 systemd-logind[1174]: New session 3 of user core. Feb 12 19:38:31.594715 systemd[1]: Started session-3.scope. Feb 12 19:38:31.642422 sshd[1288]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:31.645160 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:48724.service. Feb 12 19:38:31.645716 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:48716.service: Deactivated successfully. Feb 12 19:38:31.646787 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:38:31.647278 systemd-logind[1174]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:38:31.648422 systemd-logind[1174]: Removed session 3. Feb 12 19:38:31.683023 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 48724 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.684036 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.687026 systemd-logind[1174]: New session 4 of user core. Feb 12 19:38:31.687665 systemd[1]: Started session-4.scope. Feb 12 19:38:31.737252 sshd[1295]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:31.739086 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:48736.service. Feb 12 19:38:31.739890 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:48724.service: Deactivated successfully. 
Feb 12 19:38:31.740592 systemd-logind[1174]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:38:31.740631 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:38:31.741464 systemd-logind[1174]: Removed session 4. Feb 12 19:38:31.776188 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 48736 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.777062 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.779784 systemd-logind[1174]: New session 5 of user core. Feb 12 19:38:31.780426 systemd[1]: Started session-5.scope. Feb 12 19:38:31.833353 sudo[1307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 12 19:38:31.833514 sudo[1307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:38:31.844538 dbus-daemon[1159]: Эn\xc0qU: received setenforce notice (enforcing=-1723977296) Feb 12 19:38:31.846360 sudo[1307]: pam_unix(sudo:session): session closed for user root Feb 12 19:38:31.847848 sshd[1301]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:31.849907 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:48746.service. Feb 12 19:38:31.850640 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:48736.service: Deactivated successfully. Feb 12 19:38:31.851495 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:38:31.851920 systemd-logind[1174]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:38:31.852608 systemd-logind[1174]: Removed session 5. Feb 12 19:38:31.887417 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 48746 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:31.888339 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:31.891036 systemd-logind[1174]: New session 6 of user core. Feb 12 19:38:31.891664 systemd[1]: Started session-6.scope. Feb 12 19:38:31.941623 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 12 19:38:31.941783 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:38:31.944282 sudo[1316]: pam_unix(sudo:session): session closed for user root Feb 12 19:38:31.948006 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 12 19:38:31.948160 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:38:31.956104 systemd[1]: Stopping audit-rules.service... Feb 12 19:38:31.955000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:38:31.957312 auditctl[1319]: No rules Feb 12 19:38:31.957610 systemd[1]: audit-rules.service: Deactivated successfully. Feb 12 19:38:31.957738 kernel: kauditd_printk_skb: 209 callbacks suppressed Feb 12 19:38:31.957770 kernel: audit: type=1305 audit(1707766711.955:131): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 19:38:31.957808 systemd[1]: Stopped audit-rules.service. 
Feb 12 19:38:31.955000 audit[1319]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc54810030 a2=420 a3=0 items=0 ppid=1 pid=1319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:31.959107 systemd[1]: Starting audit-rules.service... Feb 12 19:38:31.961699 kernel: audit: type=1300 audit(1707766711.955:131): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc54810030 a2=420 a3=0 items=0 ppid=1 pid=1319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:31.961739 kernel: audit: type=1327 audit(1707766711.955:131): proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:38:31.955000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 12 19:38:31.962518 kernel: audit: type=1131 audit(1707766711.956:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.972547 augenrules[1337]: No rules Feb 12 19:38:31.973102 systemd[1]: Finished audit-rules.service. Feb 12 19:38:31.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.975564 sudo[1315]: pam_unix(sudo:session): session closed for user root Feb 12 19:38:31.974000 audit[1315]: USER_END pid=1315 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.976344 kernel: audit: type=1130 audit(1707766711.971:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.976373 kernel: audit: type=1106 audit(1707766711.974:134): pid=1315 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.976927 sshd[1309]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:31.981204 kernel: audit: type=1104 audit(1707766711.974:135): pid=1315 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.974000 audit[1315]: CRED_DISP pid=1315 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.979062 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:48762.service. Feb 12 19:38:31.979548 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:48746.service: Deactivated successfully. 
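Aside on reading the raw audit records in this log: the PROCTITLE field is the audited process's command line, hex-encoded with NUL-separated arguments. A small decoder sketch follows; the value in the record above decodes to `/sbin/auditctl -D`, i.e. auditctl clearing all rules, which matches the CONFIG_CHANGE op=remove_rule and the auditctl[1319] "No rules" entries. The NETFILTER_CFG records emitted later by kubelet's iptables calls carry the same encoding.

```python
#!/usr/bin/env python3
"""Decode audit PROCTITLE fields: the value is the process's argv,
hex-encoded, with NUL bytes separating the arguments."""

def decode_proctitle(hexstr: str) -> str:
    argv = bytes.fromhex(hexstr).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

# The record above: audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
# -> /sbin/auditctl -D   (auditctl deleting all rules before audit-rules restarts)
```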
Feb 12 19:38:31.980326 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:38:31.980784 systemd-logind[1174]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:38:31.981581 systemd-logind[1174]: Removed session 6. Feb 12 19:38:31.977000 audit[1309]: USER_END pid=1309 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:31.977000 audit[1309]: CRED_DISP pid=1309 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:31.986979 kernel: audit: type=1106 audit(1707766711.977:136): pid=1309 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:31.987024 kernel: audit: type=1104 audit(1707766711.977:137): pid=1309 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:31.987045 kernel: audit: type=1130 audit(1707766711.977:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:48762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:48762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:31.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.89:22-10.0.0.1:48746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:32.018000 audit[1342]: USER_ACCT pid=1342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:32.018839 sshd[1342]: Accepted publickey for core from 10.0.0.1 port 48762 ssh2: RSA SHA256:61J5tVZgtMsvFgBzlA318rHvk/8vx1tAF2anfHXiCnk Feb 12 19:38:32.018000 audit[1342]: CRED_ACQ pid=1342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:32.019000 audit[1342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd78655140 a2=3 a3=0 items=0 ppid=1 pid=1342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:32.019000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 19:38:32.019610 sshd[1342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:38:32.022449 systemd-logind[1174]: New session 7 of user core. Feb 12 19:38:32.023089 systemd[1]: Started session-7.scope. Feb 12 19:38:32.024000 audit[1342]: USER_START pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:32.025000 audit[1347]: CRED_ACQ pid=1347 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:32.072000 audit[1348]: USER_ACCT pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:32.074000 sudo[1348]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:38:32.072000 audit[1348]: CRED_REFR pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:32.074168 sudo[1348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:38:32.074000 audit[1348]: USER_START pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:32.833622 systemd[1]: Reloading. 
Feb 12 19:38:32.886257 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2024-02-12T19:38:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:38:32.886280 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2024-02-12T19:38:32Z" level=info msg="torcx already run" Feb 12 19:38:32.949245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:38:32.949261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:38:32.964843 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:38:33.028477 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:38:33.033462 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:38:33.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.033940 systemd[1]: Reached target network-online.target. Feb 12 19:38:33.035398 systemd[1]: Started kubelet.service. Feb 12 19:38:33.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.044266 systemd[1]: Starting coreos-metadata.service... Feb 12 19:38:33.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.051556 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 12 19:38:33.051772 systemd[1]: Finished coreos-metadata.service. Feb 12 19:38:33.088765 kubelet[1426]: E0212 19:38:33.088645 1426 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:38:33.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 19:38:33.090497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:38:33.090629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:38:33.211876 systemd[1]: Stopped kubelet.service. Feb 12 19:38:33.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:33.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.226885 systemd[1]: Reloading. Feb 12 19:38:33.274179 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-02-12T19:38:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:38:33.274202 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-02-12T19:38:33Z" level=info msg="torcx already run" Feb 12 19:38:33.336022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:38:33.336038 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:38:33.351950 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:38:33.420248 systemd[1]: Started kubelet.service. Feb 12 19:38:33.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:33.459127 kubelet[1544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:38:33.459127 kubelet[1544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:38:33.459493 kubelet[1544]: I0212 19:38:33.459151 1544 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:38:33.461371 kubelet[1544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:38:33.461371 kubelet[1544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:38:33.689966 kubelet[1544]: I0212 19:38:33.689872 1544 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:38:33.689966 kubelet[1544]: I0212 19:38:33.689902 1544 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:38:33.690109 kubelet[1544]: I0212 19:38:33.690097 1544 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:38:33.691622 kubelet[1544]: I0212 19:38:33.691596 1544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:38:33.695349 kubelet[1544]: I0212 19:38:33.695313 1544 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:38:33.695679 kubelet[1544]: I0212 19:38:33.695656 1544 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:38:33.695733 kubelet[1544]: I0212 19:38:33.695713 1544 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:38:33.695853 kubelet[1544]: I0212 19:38:33.695739 1544 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:38:33.695853 kubelet[1544]: I0212 19:38:33.695749 1544 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:38:33.695853 kubelet[1544]: I0212 19:38:33.695834 1544 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:38:33.703575 kubelet[1544]: I0212 19:38:33.703550 1544 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:38:33.703575 kubelet[1544]: I0212 19:38:33.703578 1544 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:38:33.703768 kubelet[1544]: I0212 19:38:33.703608 1544 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:38:33.703768 kubelet[1544]: I0212 19:38:33.703624 1544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:38:33.703768 kubelet[1544]: E0212 19:38:33.703680 1544 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:33.703768 kubelet[1544]: E0212 19:38:33.703706 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:33.704404 kubelet[1544]: I0212 19:38:33.704376 1544 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:38:33.704687 kubelet[1544]: W0212 19:38:33.704662 1544 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
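Aside on the kubelet restart above: the first start attempt failed flag validation because --container-runtime-endpoint was not set, and the retry succeeds, presumably once the endpoint is supplied with the socket containerd reported serving (/run/containerd/containerd.sock). A sketch of such an invocation, using only the kubelet path and kubeconfig flags visible in this log's kubelet audit PROCTITLE records (which truncate, so any remaining flags this host passes are unknown and omitted here):

```python
#!/usr/bin/env python3
"""Sketch of the fix for the earlier kubelet flag-validation failure:
point --container-runtime-endpoint at the containerd socket. The binary
path and kubeconfig flags below are the ones visible in the kubelet audit
records in this log; everything else is intentionally left out."""
import os
import sys

CONTAINERD_SOCK = "/run/containerd/containerd.sock"   # from the containerd "serving..." message above

args = [
    "/opt/bin/kubelet",
    "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
    "--kubeconfig=/etc/kubernetes/kubelet.conf",
    f"--container-runtime-endpoint=unix://{CONTAINERD_SOCK}",
]

if not os.path.exists(CONTAINERD_SOCK):
    sys.exit(f"{CONTAINERD_SOCK} not present; containerd is not up yet")

os.execv(args[0], args)   # replace this process with kubelet
```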
Feb 12 19:38:33.705042 kubelet[1544]: I0212 19:38:33.705016 1544 server.go:1186] "Started kubelet" Feb 12 19:38:33.705850 kubelet[1544]: E0212 19:38:33.705454 1544 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:38:33.705850 kubelet[1544]: E0212 19:38:33.705502 1544 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:38:33.705850 kubelet[1544]: I0212 19:38:33.705738 1544 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:38:33.704000 audit[1544]: AVC avc: denied { mac_admin } for pid=1544 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:38:33.704000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:38:33.704000 audit[1544]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ece630 a1=c000456258 a2=c000ece600 a3=25 items=0 ppid=1 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.704000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:38:33.705000 audit[1544]: AVC avc: denied { mac_admin } for pid=1544 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:38:33.705000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:38:33.705000 audit[1544]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000290f60 a1=c000456270 a2=c000ece6c0 a3=25 items=0 ppid=1 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.705000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:38:33.706675 kubelet[1544]: I0212 19:38:33.706350 1544 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 19:38:33.706675 kubelet[1544]: I0212 19:38:33.706396 1544 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 19:38:33.706675 kubelet[1544]: I0212 19:38:33.706465 1544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:38:33.706675 kubelet[1544]: I0212 19:38:33.706501 1544 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:38:33.706847 kubelet[1544]: I0212 19:38:33.706835 1544 
volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:38:33.707018 kubelet[1544]: I0212 19:38:33.706991 1544 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:38:33.726000 audit[1557]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.726000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9e526230 a2=0 a3=7ffe9e52621c items=0 ppid=1544 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:38:33.728000 audit[1560]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.728000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffcf2c95c90 a2=0 a3=7ffcf2c95c7c items=0 ppid=1544 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:38:33.730817 kubelet[1544]: E0212 19:38:33.730774 1544 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:33.730884 kubelet[1544]: W0212 19:38:33.730844 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:33.730884 kubelet[1544]: E0212 19:38:33.730870 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:33.731274 kubelet[1544]: W0212 19:38:33.731237 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:33.731274 kubelet[1544]: E0212 19:38:33.731279 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:33.731501 kubelet[1544]: W0212 19:38:33.731479 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:33.731501 kubelet[1544]: E0212 19:38:33.731503 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:33.732314 kubelet[1544]: E0212 19:38:33.730936 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c7299f3d63", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 704988003, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 704988003, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.733435 kubelet[1544]: E0212 19:38:33.733365 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c729a6e811", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 705490449, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 705490449, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
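Aside on the rejected-event records here and below: they appear while the kubelet is still bootstrapping its client credentials ("Client rotation is on, will bootstrap in background"), so the anonymous user cannot create events or register the node yet. The hex suffix in the event names (e.g. 10.0.0.89.17b334c7299f3d63) is the event's first-seen timestamp in nanoseconds since the Unix epoch; a short sketch decodes it back to the FirstTimestamp printed in the same record:

```python
#!/usr/bin/env python3
"""Decode the hex suffix of a kubelet node event name back into the
event's first-seen timestamp (nanoseconds since the Unix epoch)."""
from datetime import datetime, timezone

name = "10.0.0.89.17b334c7299f3d63"        # event name from the rejected event above
node, _, suffix = name.rpartition(".")
nanos = int(suffix, 16)                    # 1707766713704988003

print(node)                                              # 10.0.0.89
print(datetime.fromtimestamp(nanos / 1e9, tz=timezone.utc))
# -> 2024-02-12 19:38:33.704988+00:00, matching FirstTimestamp 19:38:33.704988003
#    in the record (the node clock here is UTC; float division keeps microseconds)
```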
Feb 12 19:38:33.735786 kubelet[1544]: I0212 19:38:33.735744 1544 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:38:33.735786 kubelet[1544]: I0212 19:38:33.735776 1544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:38:33.735786 kubelet[1544]: I0212 19:38:33.735790 1544 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:38:33.736639 kubelet[1544]: E0212 19:38:33.736556 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.737551 kubelet[1544]: E0212 19:38:33.737508 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:38:33.738460 kubelet[1544]: I0212 19:38:33.738440 1544 policy_none.go:49] "None policy: Start" Feb 12 19:38:33.738603 kubelet[1544]: E0212 19:38:33.738522 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.738987 kubelet[1544]: I0212 19:38:33.738949 1544 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:38:33.738987 kubelet[1544]: I0212 19:38:33.738975 1544 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:38:33.744145 kubelet[1544]: I0212 19:38:33.744119 1544 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:38:33.742000 audit[1544]: AVC avc: denied { mac_admin } for pid=1544 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:38:33.742000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:38:33.742000 audit[1544]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00076a6f0 a1=c00122b6f8 a2=c00076a6c0 a3=25 items=0 ppid=1 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.742000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:38:33.745669 kubelet[1544]: I0212 19:38:33.745450 1544 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:38:33.745733 kubelet[1544]: I0212 19:38:33.745707 1544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:38:33.745997 kubelet[1544]: E0212 19:38:33.745923 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72c034742", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 745098562, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 745098562, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.746354 kubelet[1544]: E0212 19:38:33.746312 1544 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.89\" not found" Feb 12 19:38:33.736000 audit[1564]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.736000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcbe5f2ed0 a2=0 a3=7ffcbe5f2ebc items=0 ppid=1544 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:38:33.750000 audit[1571]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.750000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffea6e9b4a0 a2=0 a3=7ffea6e9b48c items=0 ppid=1544 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:38:33.779000 audit[1576]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.779000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 
a1=7fff2f6e5f70 a2=0 a3=7fff2f6e5f5c items=0 ppid=1544 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 19:38:33.779000 audit[1577]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.779000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd89bdd550 a2=0 a3=7ffd89bdd53c items=0 ppid=1544 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:38:33.782000 audit[1580]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.782000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdc595d5a0 a2=0 a3=7ffdc595d58c items=0 ppid=1544 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:38:33.784000 audit[1583]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.784000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff471349e0 a2=0 a3=7fff471349cc items=0 ppid=1544 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:38:33.785000 audit[1584]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.785000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd52301340 a2=0 a3=7ffd5230132c items=0 ppid=1544 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:38:33.786000 audit[1585]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1585 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.786000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b4512b0 a2=0 a3=7ffe9b45129c items=0 ppid=1544 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:38:33.787000 audit[1587]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.787000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffda9190290 a2=0 a3=7ffda919027c items=0 ppid=1544 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:38:33.808260 kubelet[1544]: I0212 19:38:33.808234 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:33.789000 audit[1589]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.789000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd2c138be0 a2=0 a3=7ffd2c138bcc items=0 ppid=1544 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:38:33.809724 kubelet[1544]: E0212 19:38:33.809701 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:33.809887 kubelet[1544]: E0212 19:38:33.809799 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 
19, 38, 33, 808181441, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.809000 audit[1592]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.809000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc58677640 a2=0 a3=7ffc5867762c items=0 ppid=1544 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:38:33.810709 kubelet[1544]: E0212 19:38:33.810552 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 808195611, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:38:33.811613 kubelet[1544]: E0212 19:38:33.811554 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 808199106, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:33.810000 audit[1594]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.810000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffcae7a75a0 a2=0 a3=7ffcae7a758c items=0 ppid=1544 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:38:33.816000 audit[1597]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.816000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffea4dc76f0 a2=0 a3=7ffea4dc76dc items=0 ppid=1544 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:38:33.818153 kubelet[1544]: I0212 19:38:33.818137 1544 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:38:33.817000 audit[1598]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.817000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed2b4cfa0 a2=0 a3=7ffed2b4cf8c items=0 ppid=1544 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:38:33.817000 audit[1599]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.817000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb675b8d0 a2=0 a3=7fffb675b8bc items=0 ppid=1544 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:38:33.818000 audit[1600]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.818000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffd9aa6c60 a2=0 a3=7fffd9aa6c4c items=0 ppid=1544 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:38:33.818000 audit[1601]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.818000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc4f43c60 a2=0 a3=7ffcc4f43c4c items=0 ppid=1544 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:38:33.819000 audit[1603]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:33.819000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc04a18120 a2=0 a3=7ffc04a1810c items=0 ppid=1544 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:38:33.820000 audit[1604]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1604 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.820000 audit[1604]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe45480820 a2=0 a3=7ffe4548080c items=0 ppid=1544 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:38:33.820000 audit[1605]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.820000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd3bbb37f0 a2=0 a3=7ffd3bbb37dc items=0 ppid=1544 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:38:33.822000 audit[1607]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.822000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe818568c0 a2=0 a3=7ffe818568ac items=0 ppid=1544 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:38:33.823000 audit[1608]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.823000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc2f024f60 a2=0 a3=7ffc2f024f4c items=0 ppid=1544 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:38:33.824000 audit[1609]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.824000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd70ee29b0 a2=0 a3=7ffd70ee299c items=0 ppid=1544 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:38:33.825000 audit[1611]: NETFILTER_CFG table=nat:27 family=10 entries=1 
op=nft_register_rule pid=1611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.825000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd8543dd70 a2=0 a3=7ffd8543dd5c items=0 ppid=1544 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:38:33.827000 audit[1613]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.827000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffde4d13a20 a2=0 a3=7ffde4d13a0c items=0 ppid=1544 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:38:33.829000 audit[1615]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.829000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd66538d50 a2=0 a3=7ffd66538d3c items=0 ppid=1544 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:38:33.830000 audit[1617]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.830000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe18cfced0 a2=0 a3=7ffe18cfcebc items=0 ppid=1544 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:38:33.832000 audit[1619]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.832000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff76813250 a2=0 a3=7fff7681323c items=0 ppid=1544 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.832000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:38:33.834545 kubelet[1544]: I0212 19:38:33.834521 1544 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:38:33.834545 kubelet[1544]: I0212 19:38:33.834538 1544 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:38:33.834595 kubelet[1544]: I0212 19:38:33.834558 1544 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:38:33.834618 kubelet[1544]: E0212 19:38:33.834601 1544 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:38:33.833000 audit[1620]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.833000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd150c7bc0 a2=0 a3=7ffd150c7bac items=0 ppid=1544 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:38:33.835386 kubelet[1544]: W0212 19:38:33.835320 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:33.835386 kubelet[1544]: E0212 19:38:33.835349 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:33.834000 audit[1621]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.834000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff827806c0 a2=0 a3=7fff827806ac items=0 ppid=1544 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:38:33.835000 audit[1622]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:33.835000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc57772ce0 a2=0 a3=7ffc57772ccc items=0 ppid=1544 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:33.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 
19:38:33.932139 kubelet[1544]: E0212 19:38:33.932091 1544 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:34.011210 kubelet[1544]: I0212 19:38:34.011176 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:34.012445 kubelet[1544]: E0212 19:38:34.012344 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 11119580, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:38:34.012660 kubelet[1544]: E0212 19:38:34.012552 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:34.013252 kubelet[1544]: E0212 19:38:34.013163 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 11132305, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:34.106905 kubelet[1544]: E0212 19:38:34.106798 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 11136927, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
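Only three event names ever appear in these rejections (`10.0.0.89.17b334c72b690cb0`, `...17b334c72b6920ab`, `...17b334c72b692c6b`): client-go's event recorder names an event `<object>.<hex UnixNano of first timestamp>`, so the same three names recur while only Count and LastTimestamp grow. A quick check of that suffix (illustrative sketch):

```go
// event_name.go - recover the timestamp embedded in a core/v1 Event name suffix.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Suffix of "10.0.0.89.17b334c72b690cb0" from the log above.
	const suffix = "17b334c72b690cb0"
	ns, err := strconv.ParseUint(suffix, 16, 64)
	if err != nil {
		panic(err)
	}
	// Prints 2024-02-12T19:38:33.734991024Z, which matches the event's
	// FirstTimestamp in the kubelet output above.
	fmt.Println(time.Unix(0, int64(ns)).UTC().Format(time.RFC3339Nano))
}
```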
Feb 12 19:38:34.334827 kubelet[1544]: E0212 19:38:34.334703 1544 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:34.413634 kubelet[1544]: I0212 19:38:34.413588 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:34.414735 kubelet[1544]: E0212 19:38:34.414708 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:34.414871 kubelet[1544]: E0212 19:38:34.414790 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 413543325, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
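The lease-controller retry interval doubles on every failure; this section shows 400ms, 800ms, then 1.6s, 3.2s and 6.4s before registration finally succeeds. A minimal sketch of that doubling pattern (just the arithmetic visible above, not the kubelet's actual retry code):

```go
// backoff.go - the doubling retry schedule visible in the lease-controller errors above.
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 400 * time.Millisecond // first retry interval seen in this section
	for i := 0; i < 5; i++ {
		fmt.Println(d) // 400ms, 800ms, 1.6s, 3.2s, 6.4s
		d *= 2
	}
}
```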
Feb 12 19:38:34.507254 kubelet[1544]: E0212 19:38:34.507180 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 413552441, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:34.548535 kubelet[1544]: W0212 19:38:34.548515 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:34.548535 kubelet[1544]: E0212 19:38:34.548538 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:34.704288 kubelet[1544]: E0212 19:38:34.704173 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:34.707580 kubelet[1544]: E0212 19:38:34.707496 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 34, 413555264, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:34.813480 kubelet[1544]: W0212 19:38:34.813433 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:34.813480 kubelet[1544]: E0212 19:38:34.813459 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:35.044063 kubelet[1544]: W0212 19:38:35.044032 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:35.044190 kubelet[1544]: E0212 19:38:35.044068 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:35.136396 kubelet[1544]: E0212 19:38:35.136358 1544 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:35.215440 kubelet[1544]: I0212 19:38:35.215402 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:35.216220 kubelet[1544]: E0212 19:38:35.216196 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:35.216543 kubelet[1544]: E0212 19:38:35.216472 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 35, 215362817, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 
'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:35.217189 kubelet[1544]: E0212 19:38:35.217139 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 35, 215371424, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:35.278468 kubelet[1544]: W0212 19:38:35.278439 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:35.278468 kubelet[1544]: E0212 19:38:35.278467 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:35.306323 kubelet[1544]: E0212 19:38:35.306181 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 35, 215374648, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:35.704887 kubelet[1544]: E0212 19:38:35.704766 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:36.705031 kubelet[1544]: E0212 19:38:36.704965 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:36.737899 kubelet[1544]: E0212 19:38:36.737868 1544 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:36.815302 kubelet[1544]: W0212 19:38:36.815282 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:36.815302 kubelet[1544]: E0212 19:38:36.815306 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:36.816830 kubelet[1544]: I0212 19:38:36.816804 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:36.817704 kubelet[1544]: E0212 19:38:36.817689 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:36.817834 kubelet[1544]: E0212 19:38:36.817732 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 36, 816774333, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:38:36.818564 kubelet[1544]: E0212 19:38:36.818518 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 36, 816783138, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:36.819324 kubelet[1544]: E0212 19:38:36.819275 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 36, 816785519, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
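Every rejection above has the same shape: the API server still sees the kubelet as `system:anonymous` (its bootstrap credentials only take effect at the certificate-rotation message at 19:38:43), so it can neither create its Node object nor patch its Events. Whether a given identity holds such a permission can be probed with a SelfSubjectAccessReview; a hedged client-go sketch follows (the kubeconfig path is an assumption, adjust for the cluster at hand):

```go
// ssar.go - ask the API server whether the current identity may patch Events in
// "default", the exact access the rejected node events above required.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the real path on this node is not shown in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "default",
				Verb:      "patch",
				Resource:  "events",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}
```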
Feb 12 19:38:37.326280 kubelet[1544]: W0212 19:38:37.326226 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:37.326280 kubelet[1544]: E0212 19:38:37.326265 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:37.527789 kubelet[1544]: W0212 19:38:37.527752 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:37.527789 kubelet[1544]: E0212 19:38:37.527781 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:37.705497 kubelet[1544]: E0212 19:38:37.705363 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:37.779541 kubelet[1544]: W0212 19:38:37.779512 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:37.779541 kubelet[1544]: E0212 19:38:37.779544 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:38.705946 kubelet[1544]: E0212 19:38:38.705898 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:39.706624 kubelet[1544]: E0212 19:38:39.706567 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:39.939391 kubelet[1544]: E0212 19:38:39.939354 1544 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:38:40.019178 kubelet[1544]: I0212 19:38:40.019150 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:40.020441 kubelet[1544]: E0212 19:38:40.020409 1544 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:38:40.020500 kubelet[1544]: E0212 19:38:40.020413 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b690cb0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734991024, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 40, 19112071, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b690cb0" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:40.021164 kubelet[1544]: E0212 19:38:40.021118 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b6920ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734996139, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 40, 19122724, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b6920ab" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:38:40.021775 kubelet[1544]: E0212 19:38:40.021729 1544 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b334c72b692c6b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 38, 33, 734999147, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 38, 40, 19126358, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b334c72b692c6b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:38:40.707209 kubelet[1544]: E0212 19:38:40.707164 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:40.958394 kubelet[1544]: W0212 19:38:40.958243 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:40.958394 kubelet[1544]: E0212 19:38:40.958279 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:38:41.450941 kubelet[1544]: W0212 19:38:41.450894 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:41.450941 kubelet[1544]: E0212 19:38:41.450932 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:38:41.707980 kubelet[1544]: E0212 19:38:41.707855 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:41.740877 kubelet[1544]: W0212 19:38:41.740837 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:41.740877 kubelet[1544]: E0212 19:38:41.740870 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:38:42.149188 kubelet[1544]: W0212 19:38:42.149151 1544 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:42.149188 kubelet[1544]: E0212 19:38:42.149186 1544 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:38:42.708186 kubelet[1544]: E0212 19:38:42.708141 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:43.692204 kubelet[1544]: I0212 19:38:43.692140 1544 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:38:43.708446 kubelet[1544]: E0212 19:38:43.708421 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:43.747438 kubelet[1544]: E0212 19:38:43.747395 1544 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.89\" not found" Feb 12 19:38:44.080103 kubelet[1544]: E0212 19:38:44.080074 1544 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.89" not found Feb 12 19:38:44.709438 kubelet[1544]: E0212 19:38:44.709396 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:45.134691 kubelet[1544]: E0212 19:38:45.134653 1544 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.89" not found Feb 12 19:38:45.709991 kubelet[1544]: E0212 19:38:45.709943 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:46.343216 kubelet[1544]: E0212 19:38:46.343185 1544 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.89\" not found" node="10.0.0.89" Feb 12 19:38:46.421016 kubelet[1544]: I0212 19:38:46.420984 1544 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:38:46.536010 kubelet[1544]: I0212 19:38:46.535978 1544 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.89" Feb 12 19:38:46.711019 kubelet[1544]: E0212 19:38:46.710899 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:46.729581 kubelet[1544]: E0212 19:38:46.729534 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:46.830227 kubelet[1544]: E0212 19:38:46.830190 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:46.930688 kubelet[1544]: E0212 19:38:46.930653 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.031327 
kubelet[1544]: E0212 19:38:47.031301 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.131168 sudo[1348]: pam_unix(sudo:session): session closed for user root Feb 12 19:38:47.129000 audit[1348]: USER_END pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:47.131545 kubelet[1544]: E0212 19:38:47.131378 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.131813 kernel: kauditd_printk_skb: 130 callbacks suppressed Feb 12 19:38:47.131868 kernel: audit: type=1106 audit(1707766727.129:192): pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:47.132229 sshd[1342]: pam_unix(sshd:session): session closed for user core Feb 12 19:38:47.134054 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:48762.service: Deactivated successfully. Feb 12 19:38:47.129000 audit[1348]: CRED_DISP pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:47.134773 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:38:47.135662 systemd-logind[1174]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:38:47.136313 systemd-logind[1174]: Removed session 7. Feb 12 19:38:47.136535 kernel: audit: type=1104 audit(1707766727.129:193): pid=1348 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:38:47.136596 kernel: audit: type=1106 audit(1707766727.131:194): pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:47.131000 audit[1342]: USER_END pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:47.131000 audit[1342]: CRED_DISP pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:47.141767 kernel: audit: type=1104 audit(1707766727.131:195): pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:38:47.141839 kernel: audit: type=1131 audit(1707766727.133:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:48762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:38:47.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:48762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:38:47.231895 kubelet[1544]: E0212 19:38:47.231842 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.332420 kubelet[1544]: E0212 19:38:47.332284 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.432714 kubelet[1544]: E0212 19:38:47.432677 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.533070 kubelet[1544]: E0212 19:38:47.533047 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.633490 kubelet[1544]: E0212 19:38:47.633425 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.712070 kubelet[1544]: E0212 19:38:47.712022 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:47.734351 kubelet[1544]: E0212 19:38:47.734304 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.834891 kubelet[1544]: E0212 19:38:47.834859 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:47.935208 kubelet[1544]: E0212 19:38:47.935126 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.035784 kubelet[1544]: E0212 19:38:48.035722 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.136283 kubelet[1544]: E0212 19:38:48.136235 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.237037 kubelet[1544]: E0212 19:38:48.236935 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.337231 kubelet[1544]: E0212 19:38:48.337198 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.437684 kubelet[1544]: E0212 19:38:48.437643 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.538199 kubelet[1544]: E0212 19:38:48.538171 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.638629 kubelet[1544]: E0212 19:38:48.638586 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.712253 kubelet[1544]: E0212 19:38:48.712210 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:48.739435 kubelet[1544]: E0212 19:38:48.739396 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:48.839809 kubelet[1544]: E0212 19:38:48.839702 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.0.0.89\" not found" Feb 12 19:38:48.940132 kubelet[1544]: E0212 19:38:48.940091 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.040706 kubelet[1544]: E0212 19:38:49.040666 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.141284 kubelet[1544]: E0212 19:38:49.141154 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.241824 kubelet[1544]: E0212 19:38:49.241777 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.342329 kubelet[1544]: E0212 19:38:49.342284 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.442994 kubelet[1544]: E0212 19:38:49.442871 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.543380 kubelet[1544]: E0212 19:38:49.543341 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.643755 kubelet[1544]: E0212 19:38:49.643727 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.712514 kubelet[1544]: E0212 19:38:49.712435 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:49.744665 kubelet[1544]: E0212 19:38:49.744640 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.845345 kubelet[1544]: E0212 19:38:49.845297 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:49.945599 kubelet[1544]: E0212 19:38:49.945567 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.046168 kubelet[1544]: E0212 19:38:50.046112 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.146718 kubelet[1544]: E0212 19:38:50.146660 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.247323 kubelet[1544]: E0212 19:38:50.247281 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.347863 kubelet[1544]: E0212 19:38:50.347716 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.448302 kubelet[1544]: E0212 19:38:50.448249 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.549392 kubelet[1544]: E0212 19:38:50.549355 1544 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:38:50.651094 kubelet[1544]: I0212 19:38:50.650788 1544 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:38:50.651159 env[1188]: time="2024-02-12T19:38:50.651019948Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 19:38:50.651415 kubelet[1544]: I0212 19:38:50.651187 1544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:38:50.713152 kubelet[1544]: E0212 19:38:50.713127 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:50.713152 kubelet[1544]: I0212 19:38:50.713148 1544 apiserver.go:52] "Watching apiserver" Feb 12 19:38:50.714842 kubelet[1544]: I0212 19:38:50.714821 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:38:50.714901 kubelet[1544]: I0212 19:38:50.714885 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:38:50.714928 kubelet[1544]: I0212 19:38:50.714911 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:38:50.715062 kubelet[1544]: E0212 19:38:50.715020 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:38:50.807994 kubelet[1544]: I0212 19:38:50.807962 1544 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:38:50.882494 kubelet[1544]: I0212 19:38:50.882463 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/279e61a1-636f-4855-8fb5-870f12bbdc1a-tigera-ca-bundle\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.882560 kubelet[1544]: I0212 19:38:50.882505 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdpt4\" (UniqueName: \"kubernetes.io/projected/285c6db4-31c3-48f8-ba5c-36202e194f38-kube-api-access-pdpt4\") pod \"csi-node-driver-726pv\" (UID: \"285c6db4-31c3-48f8-ba5c-36202e194f38\") " pod="calico-system/csi-node-driver-726pv" Feb 12 19:38:50.882560 kubelet[1544]: I0212 19:38:50.882525 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/280973c3-46d3-4e6b-bd0e-04ad6964c1fe-xtables-lock\") pod \"kube-proxy-fzhqv\" (UID: \"280973c3-46d3-4e6b-bd0e-04ad6964c1fe\") " pod="kube-system/kube-proxy-fzhqv" Feb 12 19:38:50.882625 kubelet[1544]: I0212 19:38:50.882562 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-lib-modules\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.882668 kubelet[1544]: I0212 19:38:50.882644 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/285c6db4-31c3-48f8-ba5c-36202e194f38-registration-dir\") pod \"csi-node-driver-726pv\" (UID: \"285c6db4-31c3-48f8-ba5c-36202e194f38\") " pod="calico-system/csi-node-driver-726pv" Feb 12 19:38:50.882742 kubelet[1544]: I0212 19:38:50.882718 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hpfr\" (UniqueName: 
\"kubernetes.io/projected/280973c3-46d3-4e6b-bd0e-04ad6964c1fe-kube-api-access-4hpfr\") pod \"kube-proxy-fzhqv\" (UID: \"280973c3-46d3-4e6b-bd0e-04ad6964c1fe\") " pod="kube-system/kube-proxy-fzhqv" Feb 12 19:38:50.882771 kubelet[1544]: I0212 19:38:50.882764 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-policysync\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.882802 kubelet[1544]: I0212 19:38:50.882792 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/285c6db4-31c3-48f8-ba5c-36202e194f38-varrun\") pod \"csi-node-driver-726pv\" (UID: \"285c6db4-31c3-48f8-ba5c-36202e194f38\") " pod="calico-system/csi-node-driver-726pv" Feb 12 19:38:50.882841 kubelet[1544]: I0212 19:38:50.882828 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/285c6db4-31c3-48f8-ba5c-36202e194f38-kubelet-dir\") pod \"csi-node-driver-726pv\" (UID: \"285c6db4-31c3-48f8-ba5c-36202e194f38\") " pod="calico-system/csi-node-driver-726pv" Feb 12 19:38:50.882892 kubelet[1544]: I0212 19:38:50.882862 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/285c6db4-31c3-48f8-ba5c-36202e194f38-socket-dir\") pod \"csi-node-driver-726pv\" (UID: \"285c6db4-31c3-48f8-ba5c-36202e194f38\") " pod="calico-system/csi-node-driver-726pv" Feb 12 19:38:50.882928 kubelet[1544]: I0212 19:38:50.882919 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-cni-bin-dir\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.882953 kubelet[1544]: I0212 19:38:50.882950 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-cni-net-dir\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.882976 kubelet[1544]: I0212 19:38:50.882973 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/279e61a1-636f-4855-8fb5-870f12bbdc1a-node-certs\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883012 kubelet[1544]: I0212 19:38:50.883006 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-var-run-calico\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883040 kubelet[1544]: I0212 19:38:50.883034 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-var-lib-calico\") pod 
\"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883067 kubelet[1544]: I0212 19:38:50.883060 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-cni-log-dir\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883136 kubelet[1544]: I0212 19:38:50.883118 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-flexvol-driver-host\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883179 kubelet[1544]: I0212 19:38:50.883166 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/280973c3-46d3-4e6b-bd0e-04ad6964c1fe-kube-proxy\") pod \"kube-proxy-fzhqv\" (UID: \"280973c3-46d3-4e6b-bd0e-04ad6964c1fe\") " pod="kube-system/kube-proxy-fzhqv" Feb 12 19:38:50.883208 kubelet[1544]: I0212 19:38:50.883199 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/280973c3-46d3-4e6b-bd0e-04ad6964c1fe-lib-modules\") pod \"kube-proxy-fzhqv\" (UID: \"280973c3-46d3-4e6b-bd0e-04ad6964c1fe\") " pod="kube-system/kube-proxy-fzhqv" Feb 12 19:38:50.883231 kubelet[1544]: I0212 19:38:50.883222 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/279e61a1-636f-4855-8fb5-870f12bbdc1a-xtables-lock\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883265 kubelet[1544]: I0212 19:38:50.883246 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7g2c\" (UniqueName: \"kubernetes.io/projected/279e61a1-636f-4855-8fb5-870f12bbdc1a-kube-api-access-d7g2c\") pod \"calico-node-dbtbd\" (UID: \"279e61a1-636f-4855-8fb5-870f12bbdc1a\") " pod="calico-system/calico-node-dbtbd" Feb 12 19:38:50.883290 kubelet[1544]: I0212 19:38:50.883273 1544 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:38:50.986765 kubelet[1544]: E0212 19:38:50.986696 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:50.986899 kubelet[1544]: W0212 19:38:50.986882 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:50.986978 kubelet[1544]: E0212 19:38:50.986964 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:50.989140 kubelet[1544]: E0212 19:38:50.989118 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:50.989140 kubelet[1544]: W0212 19:38:50.989136 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:50.989251 kubelet[1544]: E0212 19:38:50.989162 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.052605 kubelet[1544]: E0212 19:38:51.052577 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.052605 kubelet[1544]: W0212 19:38:51.052597 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.052605 kubelet[1544]: E0212 19:38:51.052618 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.084611 kubelet[1544]: E0212 19:38:51.084577 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.084611 kubelet[1544]: W0212 19:38:51.084599 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.084611 kubelet[1544]: E0212 19:38:51.084612 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.084778 kubelet[1544]: E0212 19:38:51.084773 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.084800 kubelet[1544]: W0212 19:38:51.084779 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.084800 kubelet[1544]: E0212 19:38:51.084789 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.185924 kubelet[1544]: E0212 19:38:51.185905 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.185924 kubelet[1544]: W0212 19:38:51.185919 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.186035 kubelet[1544]: E0212 19:38:51.185936 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:51.186107 kubelet[1544]: E0212 19:38:51.186094 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.186107 kubelet[1544]: W0212 19:38:51.186103 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.186107 kubelet[1544]: E0212 19:38:51.186111 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.286800 kubelet[1544]: E0212 19:38:51.286773 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.286800 kubelet[1544]: W0212 19:38:51.286789 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.286800 kubelet[1544]: E0212 19:38:51.286804 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.287027 kubelet[1544]: E0212 19:38:51.286984 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.287027 kubelet[1544]: W0212 19:38:51.286994 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.287027 kubelet[1544]: E0212 19:38:51.287006 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.370830 kubelet[1544]: E0212 19:38:51.370805 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.370969 kubelet[1544]: W0212 19:38:51.370825 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.370969 kubelet[1544]: E0212 19:38:51.370868 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.388050 kubelet[1544]: E0212 19:38:51.388028 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.388050 kubelet[1544]: W0212 19:38:51.388041 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.388158 kubelet[1544]: E0212 19:38:51.388055 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:51.452521 kubelet[1544]: E0212 19:38:51.452498 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:51.452521 kubelet[1544]: W0212 19:38:51.452518 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:51.452674 kubelet[1544]: E0212 19:38:51.452539 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:51.617887 kubelet[1544]: E0212 19:38:51.617779 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:51.618505 env[1188]: time="2024-02-12T19:38:51.618463526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzhqv,Uid:280973c3-46d3-4e6b-bd0e-04ad6964c1fe,Namespace:kube-system,Attempt:0,}" Feb 12 19:38:51.618851 kubelet[1544]: E0212 19:38:51.618834 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:51.619424 env[1188]: time="2024-02-12T19:38:51.619389381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbtbd,Uid:279e61a1-636f-4855-8fb5-870f12bbdc1a,Namespace:calico-system,Attempt:0,}" Feb 12 19:38:51.713708 kubelet[1544]: E0212 19:38:51.713685 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:52.447972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4271645666.mount: Deactivated successfully. 
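The recurring driver-call.go failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init: the executable is not present, so there is no JSON output to unmarshal and the plugin directory is skipped. They are noisy but harmless here, since nothing in this run mounts a FlexVolume of that type. For reference, a FlexVolume driver is expected to answer init with a small JSON status on stdout; a minimal sketch under the conventional call protocol (operation passed as the first argument) could look like this. It is an illustration only, not the actual nodeagent~uds binary:

```python
#!/usr/bin/env python3
"""Minimal FlexVolume-style driver sketch: answer 'init' with a JSON status.

Assumption: the standard FlexVolume call convention (operation as argv[1]).
"""
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # 'attach: false' tells the kubelet this driver has no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Every other operation is reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```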
Feb 12 19:38:52.454503 env[1188]: time="2024-02-12T19:38:52.454467528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.455234 env[1188]: time="2024-02-12T19:38:52.455208319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.457920 env[1188]: time="2024-02-12T19:38:52.457893327Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.459148 env[1188]: time="2024-02-12T19:38:52.459122332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.460396 env[1188]: time="2024-02-12T19:38:52.460373493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.461658 env[1188]: time="2024-02-12T19:38:52.461632985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.462937 env[1188]: time="2024-02-12T19:38:52.462916314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.464322 env[1188]: time="2024-02-12T19:38:52.464296387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:52.480525 env[1188]: time="2024-02-12T19:38:52.480374125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:38:52.480525 env[1188]: time="2024-02-12T19:38:52.480414713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:38:52.480525 env[1188]: time="2024-02-12T19:38:52.480427798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:38:52.480794 env[1188]: time="2024-02-12T19:38:52.480761611Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bd7bedba19d738880546ac8d442d2e1f7ad0a4d2f3f3b24bcf30719a7fb3477 pid=1651 runtime=io.containerd.runc.v2 Feb 12 19:38:52.486762 env[1188]: time="2024-02-12T19:38:52.486625036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:38:52.486762 env[1188]: time="2024-02-12T19:38:52.486655982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:38:52.486762 env[1188]: time="2024-02-12T19:38:52.486665563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:38:52.486947 env[1188]: time="2024-02-12T19:38:52.486789089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5 pid=1668 runtime=io.containerd.runc.v2 Feb 12 19:38:52.513326 env[1188]: time="2024-02-12T19:38:52.512904712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzhqv,Uid:280973c3-46d3-4e6b-bd0e-04ad6964c1fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd7bedba19d738880546ac8d442d2e1f7ad0a4d2f3f3b24bcf30719a7fb3477\"" Feb 12 19:38:52.514274 kubelet[1544]: E0212 19:38:52.513819 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:52.515023 env[1188]: time="2024-02-12T19:38:52.514997453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:38:52.518045 env[1188]: time="2024-02-12T19:38:52.518024933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbtbd,Uid:279e61a1-636f-4855-8fb5-870f12bbdc1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\"" Feb 12 19:38:52.518662 kubelet[1544]: E0212 19:38:52.518551 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:52.714152 kubelet[1544]: E0212 19:38:52.714036 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:52.835649 kubelet[1544]: E0212 19:38:52.835616 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:38:53.704524 kubelet[1544]: E0212 19:38:53.704478 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:53.714708 kubelet[1544]: E0212 19:38:53.714680 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:53.983688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760420591.mount: Deactivated successfully. 
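The dns.go:156 "Nameserver limits exceeded" warnings above mean the host resolver configuration lists more nameservers than the kubelet will propagate into pod sandboxes, so only the three shown (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are dropped. A quick, purely hypothetical check of how many entries the resolver file carries; the path and the cap of three are assumptions inferred from the warning text:

```python
#!/usr/bin/env python3
"""Count nameserver entries in resolv.conf and flag anything beyond three."""

RESOLV_CONF = "/etc/resolv.conf"   # assumption: the resolver file the kubelet reads
LIMIT = 3                          # cap suggested by the kubelet warning above

with open(RESOLV_CONF, encoding="utf-8") as fh:
    nameservers = [line.split()[1] for line in fh
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

print(f"{len(nameservers)} nameservers: {' '.join(nameservers)}")
if len(nameservers) > LIMIT:
    print(f"more than {LIMIT}; the extras will be omitted for pods")
```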
Feb 12 19:38:54.650830 env[1188]: time="2024-02-12T19:38:54.650779326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:54.656262 env[1188]: time="2024-02-12T19:38:54.656230340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:54.659823 env[1188]: time="2024-02-12T19:38:54.659799219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:54.662055 env[1188]: time="2024-02-12T19:38:54.662014388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:54.662438 env[1188]: time="2024-02-12T19:38:54.662414224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:38:54.663128 env[1188]: time="2024-02-12T19:38:54.663090888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 19:38:54.664110 env[1188]: time="2024-02-12T19:38:54.664068011Z" level=info msg="CreateContainer within sandbox \"6bd7bedba19d738880546ac8d442d2e1f7ad0a4d2f3f3b24bcf30719a7fb3477\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:38:54.677486 env[1188]: time="2024-02-12T19:38:54.677440730Z" level=info msg="CreateContainer within sandbox \"6bd7bedba19d738880546ac8d442d2e1f7ad0a4d2f3f3b24bcf30719a7fb3477\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"073afd4707ae95c34493f1360022a7cfae0850d0e3d7e1bb53450bf9d22db498\"" Feb 12 19:38:54.678049 env[1188]: time="2024-02-12T19:38:54.678020262Z" level=info msg="StartContainer for \"073afd4707ae95c34493f1360022a7cfae0850d0e3d7e1bb53450bf9d22db498\"" Feb 12 19:38:54.715861 kubelet[1544]: E0212 19:38:54.715807 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:54.716195 env[1188]: time="2024-02-12T19:38:54.715889792Z" level=info msg="StartContainer for \"073afd4707ae95c34493f1360022a7cfae0850d0e3d7e1bb53450bf9d22db498\" returns successfully" Feb 12 19:38:54.756000 audit[1783]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.756000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffac213a70 a2=0 a3=7fffac213a5c items=0 ppid=1744 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.762826 kernel: audit: type=1325 audit(1707766734.756:197): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.762887 kernel: audit: type=1300 audit(1707766734.756:197): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffac213a70 a2=0 a3=7fffac213a5c items=0 ppid=1744 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.762906 kernel: audit: type=1327 audit(1707766734.756:197): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:38:54.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:38:54.757000 audit[1784]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.766462 kernel: audit: type=1325 audit(1707766734.757:198): table=nat:36 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.766494 kernel: audit: type=1300 audit(1707766734.757:198): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc04538680 a2=0 a3=7ffc0453866c items=0 ppid=1744 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.757000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc04538680 a2=0 a3=7ffc0453866c items=0 ppid=1744 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.770519 kernel: audit: type=1327 audit(1707766734.757:198): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:38:54.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:38:54.758000 audit[1785]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.774759 kernel: audit: type=1325 audit(1707766734.758:199): table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.758000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf9d496e0 a2=0 a3=7ffdf9d496cc items=0 ppid=1744 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.779203 kernel: audit: type=1300 audit(1707766734.758:199): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf9d496e0 a2=0 a3=7ffdf9d496cc items=0 ppid=1744 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.779235 kernel: audit: type=1327 audit(1707766734.758:199): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:38:54.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:38:54.760000 audit[1786]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
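The audit PROCTITLE fields in this run are hex-encoded argv vectors with NUL separators; decoding the first one above yields "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. kube-proxy creating its canary chains. A small decoder for reading the remaining records (the sample value is copied from the entry above):

```python
#!/usr/bin/env python3
"""Decode a hex-encoded audit PROCTITLE value into the original command line."""
import sys

def decode_proctitle(hex_argv: str) -> str:
    # argv elements are separated by NUL bytes in the audit record.
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode("utf-8", "replace")

if __name__ == "__main__":
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sys.argv[1] if len(sys.argv) > 1 else sample))
```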
Feb 12 19:38:54.783485 kernel: audit: type=1325 audit(1707766734.760:200): table=nat:38 family=10 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.760000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9e900e80 a2=0 a3=7ffc9e900e6c items=0 ppid=1744 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:38:54.760000 audit[1787]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.760000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe7c8e7b60 a2=0 a3=7ffe7c8e7b4c items=0 ppid=1744 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:38:54.761000 audit[1788]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.761000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff62d99760 a2=0 a3=7fff62d9974c items=0 ppid=1744 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:38:54.835186 kubelet[1544]: E0212 19:38:54.835157 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:38:54.859000 audit[1789]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.859000 audit[1789]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc3ba5d200 a2=0 a3=7ffc3ba5d1ec items=0 ppid=1744 pid=1789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:38:54.862000 audit[1791]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.862000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff074ddbf0 a2=0 a3=7fff074ddbdc items=0 ppid=1744 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 19:38:54.865664 kubelet[1544]: E0212 19:38:54.865627 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:54.865000 audit[1794]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.865000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc55e76c80 a2=0 a3=7ffc55e76c6c items=0 ppid=1744 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 19:38:54.866000 audit[1795]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.866000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe53bccdf0 a2=0 a3=7ffe53bccddc items=0 ppid=1744 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:38:54.869000 audit[1797]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.869000 audit[1797]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd13a73d70 a2=0 a3=7ffd13a73d5c items=0 ppid=1744 pid=1797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:38:54.869000 audit[1798]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.869000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8ad1a070 a2=0 a3=7ffc8ad1a05c items=0 ppid=1744 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.869000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:38:54.872000 audit[1800]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.872000 audit[1800]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd59129710 a2=0 a3=7ffd591296fc items=0 ppid=1744 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:38:54.875000 audit[1803]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.875000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa551bfe0 a2=0 a3=7fffa551bfcc items=0 ppid=1744 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.875000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 19:38:54.876000 audit[1804]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.876000 audit[1804]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd4b15200 a2=0 a3=7ffcd4b151ec items=0 ppid=1744 pid=1804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:38:54.878000 audit[1806]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1806 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.878000 audit[1806]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2a1292b0 a2=0 a3=7ffc2a12929c items=0 ppid=1744 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:38:54.878000 audit[1807]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.878000 audit[1807]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe947a5c90 a2=0 a3=7ffe947a5c7c items=0 ppid=1744 pid=1807 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:38:54.880000 audit[1809]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.880000 audit[1809]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf246c9d0 a2=0 a3=7ffdf246c9bc items=0 ppid=1744 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:38:54.883000 audit[1812]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.883000 audit[1812]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0bf5d3b0 a2=0 a3=7ffe0bf5d39c items=0 ppid=1744 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:38:54.886000 audit[1815]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.886000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd435f61a0 a2=0 a3=7ffd435f618c items=0 ppid=1744 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:38:54.887000 audit[1816]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.887000 audit[1816]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefa580b40 a2=0 a3=7ffefa580b2c items=0 ppid=1744 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:38:54.888000 audit[1818]: NETFILTER_CFG 
table=nat:56 family=2 entries=2 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.888000 audit[1818]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe639fee30 a2=0 a3=7ffe639fee1c items=0 ppid=1744 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.888000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:38:54.891000 audit[1821]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:38:54.891000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc9d11a6a0 a2=0 a3=7ffc9d11a68c items=0 ppid=1744 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:38:54.896000 audit[1825]: NETFILTER_CFG table=filter:58 family=2 entries=3 op=nft_register_rule pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:38:54.896000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffde28774a0 a2=0 a3=7ffde287748c items=0 ppid=1744 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:54.904000 audit[1825]: NETFILTER_CFG table=nat:59 family=2 entries=57 op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:38:54.904000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffde28774a0 a2=0 a3=7ffde287748c items=0 ppid=1744 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:54.931044 kubelet[1544]: E0212 19:38:54.931018 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931044 kubelet[1544]: W0212 19:38:54.931036 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931044 kubelet[1544]: E0212 19:38:54.931054 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:54.931237 kubelet[1544]: E0212 19:38:54.931219 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931237 kubelet[1544]: W0212 19:38:54.931227 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931237 kubelet[1544]: E0212 19:38:54.931236 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.931384 kubelet[1544]: E0212 19:38:54.931372 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931384 kubelet[1544]: W0212 19:38:54.931380 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931384 kubelet[1544]: E0212 19:38:54.931389 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.931567 kubelet[1544]: E0212 19:38:54.931554 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931567 kubelet[1544]: W0212 19:38:54.931563 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931632 kubelet[1544]: E0212 19:38:54.931572 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.931725 kubelet[1544]: E0212 19:38:54.931713 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931725 kubelet[1544]: W0212 19:38:54.931721 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931819 kubelet[1544]: E0212 19:38:54.931729 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.931892 kubelet[1544]: E0212 19:38:54.931878 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.931892 kubelet[1544]: W0212 19:38:54.931887 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.931959 kubelet[1544]: E0212 19:38:54.931901 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:54.932085 kubelet[1544]: E0212 19:38:54.932063 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.932085 kubelet[1544]: W0212 19:38:54.932072 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.932085 kubelet[1544]: E0212 19:38:54.932081 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.932266 kubelet[1544]: E0212 19:38:54.932253 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.932266 kubelet[1544]: W0212 19:38:54.932263 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.932315 kubelet[1544]: E0212 19:38:54.932272 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.932433 kubelet[1544]: E0212 19:38:54.932422 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.932433 kubelet[1544]: W0212 19:38:54.932431 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.932485 kubelet[1544]: E0212 19:38:54.932440 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.932682 kubelet[1544]: E0212 19:38:54.932665 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.932725 kubelet[1544]: W0212 19:38:54.932683 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.932725 kubelet[1544]: E0212 19:38:54.932707 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.932873 kubelet[1544]: E0212 19:38:54.932862 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.932873 kubelet[1544]: W0212 19:38:54.932869 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.932940 kubelet[1544]: E0212 19:38:54.932879 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:54.933048 kubelet[1544]: E0212 19:38:54.933035 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.933074 kubelet[1544]: W0212 19:38:54.933054 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.933074 kubelet[1544]: E0212 19:38:54.933065 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.933231 kubelet[1544]: E0212 19:38:54.933208 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.933231 kubelet[1544]: W0212 19:38:54.933226 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.933279 kubelet[1544]: E0212 19:38:54.933234 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.933378 kubelet[1544]: E0212 19:38:54.933367 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.933378 kubelet[1544]: W0212 19:38:54.933375 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.933462 kubelet[1544]: E0212 19:38:54.933383 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.933511 kubelet[1544]: E0212 19:38:54.933492 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.933511 kubelet[1544]: W0212 19:38:54.933500 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.933567 kubelet[1544]: E0212 19:38:54.933517 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:54.933643 kubelet[1544]: E0212 19:38:54.933631 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:54.933689 kubelet[1544]: W0212 19:38:54.933649 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:54.933689 kubelet[1544]: E0212 19:38:54.933659 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:54.941000 audit[1874]: NETFILTER_CFG table=filter:60 family=2 entries=6 op=nft_register_rule pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:38:54.941000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd740be470 a2=0 a3=7ffd740be45c items=0 ppid=1744 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:54.949000 audit[1874]: NETFILTER_CFG table=nat:61 family=2 entries=78 op=nft_register_rule pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:38:54.949000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd740be470 a2=0 a3=7ffd740be45c items=0 ppid=1744 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:54.949000 audit[1875]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.949000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe90433950 a2=0 a3=7ffe9043393c items=0 ppid=1744 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:38:54.951000 audit[1877]: NETFILTER_CFG table=filter:63 family=10 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.951000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8f9ff1c0 a2=0 a3=7fff8f9ff1ac items=0 ppid=1744 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 19:38:54.956000 audit[1880]: NETFILTER_CFG table=filter:64 family=10 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.956000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffda0eed5d0 a2=0 a3=7ffda0eed5bc items=0 ppid=1744 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.956000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 19:38:54.957000 audit[1881]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.957000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee8cb8e30 a2=0 a3=7ffee8cb8e1c items=0 ppid=1744 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:38:54.958000 audit[1883]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.958000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1518c5f0 a2=0 a3=7ffd1518c5dc items=0 ppid=1744 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:38:54.959000 audit[1884]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.959000 audit[1884]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce9ccd2e0 a2=0 a3=7ffce9ccd2cc items=0 ppid=1744 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:38:54.961000 audit[1886]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.961000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5f9b3ef0 a2=0 a3=7ffd5f9b3edc items=0 ppid=1744 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 19:38:54.964000 audit[1889]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.964000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc9ef31710 a2=0 
a3=7ffc9ef316fc items=0 ppid=1744 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:38:54.965000 audit[1890]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.965000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9625c6f0 a2=0 a3=7ffd9625c6dc items=0 ppid=1744 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:38:54.967000 audit[1892]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.967000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc8689e6b0 a2=0 a3=7ffc8689e69c items=0 ppid=1744 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:38:54.967000 audit[1893]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.967000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe197ea9b0 a2=0 a3=7ffe197ea99c items=0 ppid=1744 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:38:54.969000 audit[1895]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.969000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc59f08b50 a2=0 a3=7ffc59f08b3c items=0 ppid=1744 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:38:54.972000 
audit[1898]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.972000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe843e9550 a2=0 a3=7ffe843e953c items=0 ppid=1744 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:38:54.974000 audit[1901]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=1901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.974000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec7ae4880 a2=0 a3=7ffec7ae486c items=0 ppid=1744 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 19:38:54.975000 audit[1902]: NETFILTER_CFG table=nat:76 family=10 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.975000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee30b1c40 a2=0 a3=7ffee30b1c2c items=0 ppid=1744 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:38:54.977000 audit[1904]: NETFILTER_CFG table=nat:77 family=10 entries=2 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.977000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff48929090 a2=0 a3=7fff4892907c items=0 ppid=1744 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:38:54.979000 audit[1907]: NETFILTER_CFG table=nat:78 family=10 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:38:54.979000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffce79c2d10 a2=0 a3=7ffce79c2cfc items=0 ppid=1744 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:38:54.983000 audit[1911]: NETFILTER_CFG table=filter:79 family=10 entries=3 op=nft_register_rule pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:38:54.983000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe5157e670 a2=0 a3=7ffe5157e65c items=0 ppid=1744 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.983000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:54.984000 audit[1911]: NETFILTER_CFG table=nat:80 family=10 entries=10 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:38:54.984000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffe5157e670 a2=0 a3=7ffe5157e65c items=0 ppid=1744 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:38:54.984000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:38:55.006040 kubelet[1544]: E0212 19:38:55.006017 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006040 kubelet[1544]: W0212 19:38:55.006033 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.006128 kubelet[1544]: E0212 19:38:55.006053 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.006236 kubelet[1544]: E0212 19:38:55.006222 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006236 kubelet[1544]: W0212 19:38:55.006232 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.006281 kubelet[1544]: E0212 19:38:55.006250 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.006437 kubelet[1544]: E0212 19:38:55.006426 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006437 kubelet[1544]: W0212 19:38:55.006436 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.006506 kubelet[1544]: E0212 19:38:55.006452 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.006662 kubelet[1544]: E0212 19:38:55.006646 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006662 kubelet[1544]: W0212 19:38:55.006656 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.006706 kubelet[1544]: E0212 19:38:55.006672 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.006799 kubelet[1544]: E0212 19:38:55.006790 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006799 kubelet[1544]: W0212 19:38:55.006798 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.006864 kubelet[1544]: E0212 19:38:55.006813 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.006961 kubelet[1544]: E0212 19:38:55.006950 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.006961 kubelet[1544]: W0212 19:38:55.006958 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007021 kubelet[1544]: E0212 19:38:55.006974 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.007239 kubelet[1544]: E0212 19:38:55.007215 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.007239 kubelet[1544]: W0212 19:38:55.007237 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007310 kubelet[1544]: E0212 19:38:55.007262 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.007427 kubelet[1544]: E0212 19:38:55.007415 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.007427 kubelet[1544]: W0212 19:38:55.007424 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007473 kubelet[1544]: E0212 19:38:55.007437 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.007585 kubelet[1544]: E0212 19:38:55.007575 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.007607 kubelet[1544]: W0212 19:38:55.007586 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007607 kubelet[1544]: E0212 19:38:55.007601 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.007719 kubelet[1544]: E0212 19:38:55.007711 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.007741 kubelet[1544]: W0212 19:38:55.007719 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007741 kubelet[1544]: E0212 19:38:55.007727 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.007870 kubelet[1544]: E0212 19:38:55.007857 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.007870 kubelet[1544]: W0212 19:38:55.007866 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.007935 kubelet[1544]: E0212 19:38:55.007875 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.008093 kubelet[1544]: E0212 19:38:55.008083 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.008093 kubelet[1544]: W0212 19:38:55.008091 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.008156 kubelet[1544]: E0212 19:38:55.008099 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.716698 kubelet[1544]: E0212 19:38:55.716622 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:55.867214 kubelet[1544]: E0212 19:38:55.867183 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:55.937842 kubelet[1544]: E0212 19:38:55.937808 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.937842 kubelet[1544]: W0212 19:38:55.937826 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.937842 kubelet[1544]: E0212 19:38:55.937848 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.938065 kubelet[1544]: E0212 19:38:55.938034 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.938065 kubelet[1544]: W0212 19:38:55.938043 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.938065 kubelet[1544]: E0212 19:38:55.938056 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.938256 kubelet[1544]: E0212 19:38:55.938240 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.938256 kubelet[1544]: W0212 19:38:55.938255 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.938397 kubelet[1544]: E0212 19:38:55.938273 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.938497 kubelet[1544]: E0212 19:38:55.938484 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.938497 kubelet[1544]: W0212 19:38:55.938495 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.938566 kubelet[1544]: E0212 19:38:55.938507 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.938669 kubelet[1544]: E0212 19:38:55.938657 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.938669 kubelet[1544]: W0212 19:38:55.938666 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.938752 kubelet[1544]: E0212 19:38:55.938679 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.938850 kubelet[1544]: E0212 19:38:55.938835 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.938850 kubelet[1544]: W0212 19:38:55.938846 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.938945 kubelet[1544]: E0212 19:38:55.938863 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.939045 kubelet[1544]: E0212 19:38:55.939033 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939045 kubelet[1544]: W0212 19:38:55.939042 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.939118 kubelet[1544]: E0212 19:38:55.939055 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.939213 kubelet[1544]: E0212 19:38:55.939202 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939213 kubelet[1544]: W0212 19:38:55.939212 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.939282 kubelet[1544]: E0212 19:38:55.939225 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.939400 kubelet[1544]: E0212 19:38:55.939389 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939400 kubelet[1544]: W0212 19:38:55.939398 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.939479 kubelet[1544]: E0212 19:38:55.939411 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.939631 kubelet[1544]: E0212 19:38:55.939617 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939631 kubelet[1544]: W0212 19:38:55.939627 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.939711 kubelet[1544]: E0212 19:38:55.939644 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.939808 kubelet[1544]: E0212 19:38:55.939795 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939808 kubelet[1544]: W0212 19:38:55.939805 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.939882 kubelet[1544]: E0212 19:38:55.939817 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.939982 kubelet[1544]: E0212 19:38:55.939967 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.939982 kubelet[1544]: W0212 19:38:55.939977 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.940067 kubelet[1544]: E0212 19:38:55.939995 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.940179 kubelet[1544]: E0212 19:38:55.940164 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.940179 kubelet[1544]: W0212 19:38:55.940174 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.940264 kubelet[1544]: E0212 19:38:55.940192 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.940381 kubelet[1544]: E0212 19:38:55.940367 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.940381 kubelet[1544]: W0212 19:38:55.940378 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.940462 kubelet[1544]: E0212 19:38:55.940395 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:55.940567 kubelet[1544]: E0212 19:38:55.940553 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.940567 kubelet[1544]: W0212 19:38:55.940563 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.940663 kubelet[1544]: E0212 19:38:55.940581 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:55.940752 kubelet[1544]: E0212 19:38:55.940739 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:55.940752 kubelet[1544]: W0212 19:38:55.940749 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:55.940819 kubelet[1544]: E0212 19:38:55.940761 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.011186 kubelet[1544]: E0212 19:38:56.011150 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.011186 kubelet[1544]: W0212 19:38:56.011171 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.011186 kubelet[1544]: E0212 19:38:56.011190 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.011419 kubelet[1544]: E0212 19:38:56.011396 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.011419 kubelet[1544]: W0212 19:38:56.011408 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.011419 kubelet[1544]: E0212 19:38:56.011422 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.011614 kubelet[1544]: E0212 19:38:56.011597 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.011614 kubelet[1544]: W0212 19:38:56.011607 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.011696 kubelet[1544]: E0212 19:38:56.011622 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:56.011779 kubelet[1544]: E0212 19:38:56.011765 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.011779 kubelet[1544]: W0212 19:38:56.011775 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.011839 kubelet[1544]: E0212 19:38:56.011791 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.011957 kubelet[1544]: E0212 19:38:56.011937 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.011957 kubelet[1544]: W0212 19:38:56.011951 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012025 kubelet[1544]: E0212 19:38:56.011969 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.012127 kubelet[1544]: E0212 19:38:56.012118 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.012127 kubelet[1544]: W0212 19:38:56.012125 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012192 kubelet[1544]: E0212 19:38:56.012136 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.012269 kubelet[1544]: E0212 19:38:56.012260 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.012269 kubelet[1544]: W0212 19:38:56.012266 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012358 kubelet[1544]: E0212 19:38:56.012278 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.012418 kubelet[1544]: E0212 19:38:56.012408 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.012418 kubelet[1544]: W0212 19:38:56.012415 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012481 kubelet[1544]: E0212 19:38:56.012427 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:38:56.012629 kubelet[1544]: E0212 19:38:56.012614 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.012629 kubelet[1544]: W0212 19:38:56.012626 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012736 kubelet[1544]: E0212 19:38:56.012651 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.012881 kubelet[1544]: E0212 19:38:56.012862 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.012881 kubelet[1544]: W0212 19:38:56.012871 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.012881 kubelet[1544]: E0212 19:38:56.012882 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.013044 kubelet[1544]: E0212 19:38:56.013032 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.013044 kubelet[1544]: W0212 19:38:56.013042 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.013117 kubelet[1544]: E0212 19:38:56.013059 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.013227 kubelet[1544]: E0212 19:38:56.013213 1544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:38:56.013227 kubelet[1544]: W0212 19:38:56.013223 1544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:38:56.013305 kubelet[1544]: E0212 19:38:56.013240 1544 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:38:56.649473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096755774.mount: Deactivated successfully. 
Feb 12 19:38:56.717421 kubelet[1544]: E0212 19:38:56.717374 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:56.835485 kubelet[1544]: E0212 19:38:56.835437 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:38:57.718280 kubelet[1544]: E0212 19:38:57.718241 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:58.063143 env[1188]: time="2024-02-12T19:38:58.063092588Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:58.065684 env[1188]: time="2024-02-12T19:38:58.065645132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:58.067740 env[1188]: time="2024-02-12T19:38:58.067716752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:58.069506 env[1188]: time="2024-02-12T19:38:58.069485752Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:38:58.070435 env[1188]: time="2024-02-12T19:38:58.070401205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 12 19:38:58.072152 env[1188]: time="2024-02-12T19:38:58.072129252Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 19:38:58.082411 env[1188]: time="2024-02-12T19:38:58.082370792Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec\"" Feb 12 19:38:58.082831 env[1188]: time="2024-02-12T19:38:58.082804730Z" level=info msg="StartContainer for \"54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec\"" Feb 12 19:38:58.199818 env[1188]: time="2024-02-12T19:38:58.199781412Z" level=info msg="StartContainer for \"54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec\" returns successfully" Feb 12 19:38:58.210149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec-rootfs.mount: Deactivated successfully. 
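The flexvol-driver container started above (from the calico pod2daemon-flexvol image) is the component that installs the uds FlexVolume driver, which is what the long runs of kubelet errors earlier in this log are about: while probing its plugin directory the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses stdout as JSON, so a binary that does not exist yet produces both "executable file not found in $PATH" and, from the empty output, "unexpected end of JSON input". For illustration only (this is not the real calico driver), a FlexVolume executable is expected to answer the init call with a small JSON status object, roughly:

    #!/usr/bin/env python3
    # Hypothetical, minimal FlexVolume driver skeleton: it only answers
    # the "init" call the kubelet issues while probing plugin
    # directories.  The real nodeagent~uds/uds driver does more; the
    # point is the call/response shape the kubelet expects.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # kubelet parses stdout as JSON; empty stdout is exactly what
            # produces "unexpected end of JSON input" in driver-call.go.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Operations this sketch does not implement.
        print(json.dumps({"status": "Not supported",
                          "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Consistent with this, no further FlexVolume probe errors appear later in this excerpt once the driver has been installed.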
Feb 12 19:38:58.675488 env[1188]: time="2024-02-12T19:38:58.675433400Z" level=info msg="shim disconnected" id=54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec Feb 12 19:38:58.675488 env[1188]: time="2024-02-12T19:38:58.675472110Z" level=warning msg="cleaning up after shim disconnected" id=54e7cce66d2e5e2ab0ef97b60ffc05cfbb7dcd142994363ff5f3fcb91aa893ec namespace=k8s.io Feb 12 19:38:58.675488 env[1188]: time="2024-02-12T19:38:58.675480817Z" level=info msg="cleaning up dead shim" Feb 12 19:38:58.681367 env[1188]: time="2024-02-12T19:38:58.681320208Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:38:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1998 runtime=io.containerd.runc.v2\n" Feb 12 19:38:58.718986 kubelet[1544]: E0212 19:38:58.718938 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:38:58.835709 kubelet[1544]: E0212 19:38:58.835678 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:38:58.871429 kubelet[1544]: E0212 19:38:58.871402 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:38:58.872024 env[1188]: time="2024-02-12T19:38:58.871978145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 19:38:58.881355 kubelet[1544]: I0212 19:38:58.881312 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fzhqv" podStartSLOduration=-9.223372023973503e+09 pod.CreationTimestamp="2024-02-12 19:38:46 +0000 UTC" firstStartedPulling="2024-02-12 19:38:52.51461229 +0000 UTC m=+19.091336989" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:38:54.873312025 +0000 UTC m=+21.450036734" watchObservedRunningTime="2024-02-12 19:38:58.881272376 +0000 UTC m=+25.457997075" Feb 12 19:38:59.719154 kubelet[1544]: E0212 19:38:59.719087 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:00.720016 kubelet[1544]: E0212 19:39:00.719971 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:00.835492 kubelet[1544]: E0212 19:39:00.835457 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:00.957957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200573140.mount: Deactivated successfully. 
Feb 12 19:39:01.720454 kubelet[1544]: E0212 19:39:01.720429 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:02.720777 kubelet[1544]: E0212 19:39:02.720736 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:02.835474 kubelet[1544]: E0212 19:39:02.835430 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:03.721388 kubelet[1544]: E0212 19:39:03.721323 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:04.721805 kubelet[1544]: E0212 19:39:04.721764 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:04.835216 kubelet[1544]: E0212 19:39:04.835188 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:05.407552 env[1188]: time="2024-02-12T19:39:05.407483109Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:05.410563 env[1188]: time="2024-02-12T19:39:05.410526154Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:05.412635 env[1188]: time="2024-02-12T19:39:05.412606930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:05.418477 env[1188]: time="2024-02-12T19:39:05.418373203Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:05.418925 env[1188]: time="2024-02-12T19:39:05.418883480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 12 19:39:05.420449 env[1188]: time="2024-02-12T19:39:05.420412416Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:39:05.434275 env[1188]: time="2024-02-12T19:39:05.434221036Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b\"" Feb 12 19:39:05.434751 env[1188]: time="2024-02-12T19:39:05.434719428Z" level=info msg="StartContainer for \"345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b\"" Feb 12 
19:39:05.528625 env[1188]: time="2024-02-12T19:39:05.528575104Z" level=info msg="StartContainer for \"345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b\" returns successfully" Feb 12 19:39:05.722788 kubelet[1544]: E0212 19:39:05.722651 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:05.882216 kubelet[1544]: E0212 19:39:05.882190 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:06.723708 kubelet[1544]: E0212 19:39:06.723665 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:06.834767 kubelet[1544]: E0212 19:39:06.834736 1544 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:06.883964 kubelet[1544]: E0212 19:39:06.883936 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:07.174199 update_engine[1175]: I0212 19:39:07.174139 1175 update_attempter.cc:509] Updating boot flags... Feb 12 19:39:07.723862 kubelet[1544]: E0212 19:39:07.723815 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:07.797212 env[1188]: time="2024-02-12T19:39:07.797120443Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:39:07.812475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b-rootfs.mount: Deactivated successfully. 
Feb 12 19:39:07.855348 kubelet[1544]: I0212 19:39:07.855292 1544 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:39:08.188033 env[1188]: time="2024-02-12T19:39:08.187987591Z" level=info msg="shim disconnected" id=345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b Feb 12 19:39:08.188033 env[1188]: time="2024-02-12T19:39:08.188029234Z" level=warning msg="cleaning up after shim disconnected" id=345fa1e7718926edebc1abe74582c44ace19a2dafc5a5d2af30caf7a62c4423b namespace=k8s.io Feb 12 19:39:08.188033 env[1188]: time="2024-02-12T19:39:08.188036778Z" level=info msg="cleaning up dead shim" Feb 12 19:39:08.194005 env[1188]: time="2024-02-12T19:39:08.193958422Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:39:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2086 runtime=io.containerd.runc.v2\n" Feb 12 19:39:08.724589 kubelet[1544]: E0212 19:39:08.724516 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:08.837013 env[1188]: time="2024-02-12T19:39:08.836969691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-726pv,Uid:285c6db4-31c3-48f8-ba5c-36202e194f38,Namespace:calico-system,Attempt:0,}" Feb 12 19:39:08.887287 kubelet[1544]: E0212 19:39:08.887263 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:08.887912 env[1188]: time="2024-02-12T19:39:08.887883225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 19:39:08.894032 env[1188]: time="2024-02-12T19:39:08.893974863Z" level=error msg="Failed to destroy network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:08.894369 env[1188]: time="2024-02-12T19:39:08.894328182Z" level=error msg="encountered an error cleaning up failed sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:08.894412 env[1188]: time="2024-02-12T19:39:08.894390706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-726pv,Uid:285c6db4-31c3-48f8-ba5c-36202e194f38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:08.894629 kubelet[1544]: E0212 19:39:08.894605 1544 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:08.894689 kubelet[1544]: E0212 19:39:08.894669 1544 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-726pv" Feb 12 19:39:08.894716 kubelet[1544]: E0212 19:39:08.894698 1544 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-726pv" Feb 12 19:39:08.894815 kubelet[1544]: E0212 19:39:08.894801 1544 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-726pv_calico-system(285c6db4-31c3-48f8-ba5c-36202e194f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-726pv_calico-system(285c6db4-31c3-48f8-ba5c-36202e194f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:08.895528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942-shm.mount: Deactivated successfully. 
Feb 12 19:39:09.018778 kubelet[1544]: I0212 19:39:09.018740 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:39:09.130953 kubelet[1544]: I0212 19:39:09.130910 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45ch\" (UniqueName: \"kubernetes.io/projected/63c921d3-6482-4280-a1be-aea3dbb6c75c-kube-api-access-x45ch\") pod \"nginx-deployment-8ffc5cf85-d69g6\" (UID: \"63c921d3-6482-4280-a1be-aea3dbb6c75c\") " pod="default/nginx-deployment-8ffc5cf85-d69g6" Feb 12 19:39:09.321939 env[1188]: time="2024-02-12T19:39:09.321854817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-d69g6,Uid:63c921d3-6482-4280-a1be-aea3dbb6c75c,Namespace:default,Attempt:0,}" Feb 12 19:39:09.365323 env[1188]: time="2024-02-12T19:39:09.365273877Z" level=error msg="Failed to destroy network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.365591 env[1188]: time="2024-02-12T19:39:09.365564200Z" level=error msg="encountered an error cleaning up failed sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.365626 env[1188]: time="2024-02-12T19:39:09.365602556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-d69g6,Uid:63c921d3-6482-4280-a1be-aea3dbb6c75c,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.365838 kubelet[1544]: E0212 19:39:09.365818 1544 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.365905 kubelet[1544]: E0212 19:39:09.365875 1544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-d69g6" Feb 12 19:39:09.365905 kubelet[1544]: E0212 19:39:09.365895 1544 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-8ffc5cf85-d69g6" Feb 12 19:39:09.365959 kubelet[1544]: E0212 19:39:09.365955 1544 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-d69g6_default(63c921d3-6482-4280-a1be-aea3dbb6c75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-d69g6_default(63c921d3-6482-4280-a1be-aea3dbb6c75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-d69g6" podUID=63c921d3-6482-4280-a1be-aea3dbb6c75c Feb 12 19:39:09.725398 kubelet[1544]: E0212 19:39:09.725282 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:09.845979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8-shm.mount: Deactivated successfully. Feb 12 19:39:09.888951 kubelet[1544]: I0212 19:39:09.888914 1544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:09.889493 env[1188]: time="2024-02-12T19:39:09.889465490Z" level=info msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" Feb 12 19:39:09.889819 kubelet[1544]: I0212 19:39:09.889801 1544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:09.890257 env[1188]: time="2024-02-12T19:39:09.890238534Z" level=info msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" Feb 12 19:39:09.910583 env[1188]: time="2024-02-12T19:39:09.910524057Z" level=error msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" failed" error="failed to destroy network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.910784 kubelet[1544]: E0212 19:39:09.910749 1544 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:09.910949 kubelet[1544]: E0212 19:39:09.910813 1544 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942} Feb 12 19:39:09.910949 kubelet[1544]: E0212 19:39:09.910844 1544 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"285c6db4-31c3-48f8-ba5c-36202e194f38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:39:09.910949 kubelet[1544]: E0212 19:39:09.910870 1544 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"285c6db4-31c3-48f8-ba5c-36202e194f38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-726pv" podUID=285c6db4-31c3-48f8-ba5c-36202e194f38 Feb 12 19:39:09.919743 env[1188]: time="2024-02-12T19:39:09.919681802Z" level=error msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" failed" error="failed to destroy network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:39:09.919923 kubelet[1544]: E0212 19:39:09.919884 1544 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:09.919958 kubelet[1544]: E0212 19:39:09.919929 1544 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8} Feb 12 19:39:09.919985 kubelet[1544]: E0212 19:39:09.919960 1544 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63c921d3-6482-4280-a1be-aea3dbb6c75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:39:09.919985 kubelet[1544]: E0212 19:39:09.919985 1544 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63c921d3-6482-4280-a1be-aea3dbb6c75c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-d69g6" podUID=63c921d3-6482-4280-a1be-aea3dbb6c75c Feb 12 19:39:10.726349 kubelet[1544]: E0212 19:39:10.726296 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:11.726600 
kubelet[1544]: E0212 19:39:11.726566 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:12.727751 kubelet[1544]: E0212 19:39:12.727703 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:13.703850 kubelet[1544]: E0212 19:39:13.703803 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:13.728029 kubelet[1544]: E0212 19:39:13.728007 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:14.728757 kubelet[1544]: E0212 19:39:14.728692 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:15.647119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548765528.mount: Deactivated successfully. Feb 12 19:39:15.729276 kubelet[1544]: E0212 19:39:15.729227 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:16.364460 env[1188]: time="2024-02-12T19:39:16.364401836Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:16.366841 env[1188]: time="2024-02-12T19:39:16.366785914Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:16.368575 env[1188]: time="2024-02-12T19:39:16.368521449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:16.371004 env[1188]: time="2024-02-12T19:39:16.370967777Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:16.371396 env[1188]: time="2024-02-12T19:39:16.371357837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 12 19:39:16.380362 env[1188]: time="2024-02-12T19:39:16.380307955Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 19:39:16.395004 env[1188]: time="2024-02-12T19:39:16.394948262Z" level=info msg="CreateContainer within sandbox \"d7045a681bde6c9ecc74dfa27dd74cbdd84da7a94b90d3c009ebcf4c51174de5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"62dcac25c533e274f90401639dd9296b4dfcd01dd34755de50ffb38b09023219\"" Feb 12 19:39:16.395529 env[1188]: time="2024-02-12T19:39:16.395500235Z" level=info msg="StartContainer for \"62dcac25c533e274f90401639dd9296b4dfcd01dd34755de50ffb38b09023219\"" Feb 12 19:39:16.494443 env[1188]: time="2024-02-12T19:39:16.494393915Z" level=info msg="StartContainer for \"62dcac25c533e274f90401639dd9296b4dfcd01dd34755de50ffb38b09023219\" returns successfully" Feb 12 19:39:16.580065 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Feb 12 19:39:16.580170 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 12 19:39:16.730321 kubelet[1544]: E0212 19:39:16.730192 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:16.901925 kubelet[1544]: E0212 19:39:16.901900 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:17.730482 kubelet[1544]: E0212 19:39:17.730414 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:17.772655 kernel: kauditd_printk_skb: 128 callbacks suppressed Feb 12 19:39:17.772794 kernel: audit: type=1400 audit(1707766757.763:243): avc: denied { write } for pid=2365 comm="tee" name="fd" dev="proc" ino=21612 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.772832 kernel: audit: type=1300 audit(1707766757.763:243): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffddc026997 a2=241 a3=1b6 items=1 ppid=2330 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.772867 kernel: audit: type=1307 audit(1707766757.763:243): cwd="/etc/service/enabled/felix/log" Feb 12 19:39:17.763000 audit[2365]: AVC avc: denied { write } for pid=2365 comm="tee" name="fd" dev="proc" ino=21612 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.763000 audit[2365]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffddc026997 a2=241 a3=1b6 items=1 ppid=2330 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.763000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 19:39:17.776185 kernel: audit: type=1302 audit(1707766757.763:243): item=0 name="/dev/fd/63" inode=21609 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.763000 audit: PATH item=0 name="/dev/fd/63" inode=21609 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.782639 kernel: audit: type=1327 audit(1707766757.763:243): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.785000 audit[2380]: AVC avc: denied { write } for pid=2380 comm="tee" name="fd" dev="proc" ino=19977 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.790358 kernel: audit: type=1400 audit(1707766757.785:244): avc: denied { write } for pid=2380 comm="tee" name="fd" dev="proc" ino=19977 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Feb 12 19:39:17.785000 audit[2380]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcad21987 a2=241 a3=1b6 items=1 ppid=2322 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.801065 kernel: audit: type=1300 audit(1707766757.785:244): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcad21987 a2=241 a3=1b6 items=1 ppid=2322 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.801134 kernel: audit: type=1307 audit(1707766757.785:244): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 19:39:17.785000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 19:39:17.804082 kernel: audit: type=1302 audit(1707766757.785:244): item=0 name="/dev/fd/63" inode=19969 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.785000 audit: PATH item=0 name="/dev/fd/63" inode=19969 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.806683 kernel: audit: type=1327 audit(1707766757.785:244): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.785000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.786000 audit[2361]: AVC avc: denied { write } for pid=2361 comm="tee" name="fd" dev="proc" ino=19981 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.786000 audit[2361]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe4a40997 a2=241 a3=1b6 items=1 ppid=2323 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.786000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 12 19:39:17.786000 audit: PATH item=0 name="/dev/fd/63" inode=19296 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.787000 audit[2385]: AVC avc: denied { write } for pid=2385 comm="tee" name="fd" dev="proc" ino=20758 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.787000 audit[2385]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffddf51d988 a2=241 a3=1b6 items=1 ppid=2327 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.787000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:39:17.787000 audit: 
PATH item=0 name="/dev/fd/63" inode=20755 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.790000 audit[2383]: AVC avc: denied { write } for pid=2383 comm="tee" name="fd" dev="proc" ino=19988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.790000 audit[2383]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffefedc4997 a2=241 a3=1b6 items=1 ppid=2335 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.790000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 19:39:17.790000 audit: PATH item=0 name="/dev/fd/63" inode=19972 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.808000 audit[2393]: AVC avc: denied { write } for pid=2393 comm="tee" name="fd" dev="proc" ino=19995 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.808000 audit[2393]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd39f05999 a2=241 a3=1b6 items=1 ppid=2337 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.808000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 12 19:39:17.808000 audit: PATH item=0 name="/dev/fd/63" inode=19985 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.818000 audit[2401]: AVC avc: denied { write } for pid=2401 comm="tee" name="fd" dev="proc" ino=19302 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:39:17.818000 audit[2401]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd7a5e998 a2=241 a3=1b6 items=1 ppid=2333 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:17.818000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 19:39:17.818000 audit: PATH item=0 name="/dev/fd/63" inode=20765 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:17.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:39:17.903134 kubelet[1544]: E0212 
19:39:17.903100 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:17.968546 kernel: Initializing XFRM netlink socket Feb 12 19:39:17.966729 systemd[1]: run-containerd-runc-k8s.io-62dcac25c533e274f90401639dd9296b4dfcd01dd34755de50ffb38b09023219-runc.2gCZXu.mount: Deactivated successfully. Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit: BPF prog-id=10 op=LOAD Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8610c550 a2=70 a3=7f4a058da000 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } 
for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit: BPF prog-id=11 op=LOAD Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8610c550 a2=70 a3=6e items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe8610c500 a2=70 a3=7ffe8610c550 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit: BPF prog-id=12 op=LOAD Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8610c4e0 a2=70 a3=7ffe8610c550 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8610c5c0 a2=70 a3=0 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8610c5b0 a2=70 a3=0 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.041000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.041000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffe8610c5f0 a2=70 a3=0 items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.042000 audit: BPF prog-id=13 op=LOAD Feb 12 19:39:18.042000 audit[2491]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe8610c510 a2=70 a3=ffffffff items=0 ppid=2331 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 
12 19:39:18.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:39:18.045000 audit[2496]: AVC avc: denied { bpf } for pid=2496 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.045000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd54201b10 a2=70 a3=208 items=0 ppid=2331 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.045000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:39:18.045000 audit[2496]: AVC avc: denied { bpf } for pid=2496 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:39:18.045000 audit[2496]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd542019e0 a2=70 a3=3 items=0 ppid=2331 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.045000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:39:18.049000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:39:18.049000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=32 a0=3 a1=7ffc7ee93b40 a2=0 a3=0 items=0 ppid=2331 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip" exe="/usr/sbin/ip" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.049000 audit: PROCTITLE proctitle=6970006C696E6B0064656C0063616C69636F5F746D705F41 Feb 12 19:39:18.081000 audit[2523]: NETFILTER_CFG table=mangle:81 family=2 entries=19 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:18.081000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffdfab34c90 a2=0 a3=7ffdfab34c7c items=0 ppid=2331 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.081000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:18.083000 audit[2524]: NETFILTER_CFG table=nat:82 family=2 entries=16 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:18.083000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fff87ff9ea0 a2=0 a3=5577a5645000 items=0 ppid=2331 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:39:18.083000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:18.084000 audit[2522]: NETFILTER_CFG table=raw:83 family=2 entries=19 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:18.084000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffd8484a980 a2=0 a3=564e6302e000 items=0 ppid=2331 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.084000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:18.086000 audit[2527]: NETFILTER_CFG table=filter:84 family=2 entries=39 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:18.086000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffc601a2a70 a2=0 a3=55aa44fee000 items=0 ppid=2331 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:18.086000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:18.731147 kubelet[1544]: E0212 19:39:18.731113 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:18.978472 systemd-networkd[1069]: vxlan.calico: Link UP Feb 12 19:39:18.978480 systemd-networkd[1069]: vxlan.calico: Gained carrier Feb 12 19:39:19.732115 kubelet[1544]: E0212 19:39:19.732064 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:20.696524 systemd-networkd[1069]: vxlan.calico: Gained IPv6LL Feb 12 19:39:20.732826 kubelet[1544]: E0212 19:39:20.732798 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:21.733826 kubelet[1544]: E0212 19:39:21.733787 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:21.836031 env[1188]: time="2024-02-12T19:39:21.835984885Z" level=info msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" Feb 12 19:39:21.836551 env[1188]: time="2024-02-12T19:39:21.835984875Z" level=info msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" Feb 12 19:39:21.945829 kubelet[1544]: I0212 19:39:21.945773 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dbtbd" podStartSLOduration=-9.223372000909073e+09 pod.CreationTimestamp="2024-02-12 19:38:46 +0000 UTC" firstStartedPulling="2024-02-12 19:38:52.518828289 +0000 UTC m=+19.095552988" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:39:16.929036248 +0000 UTC m=+43.505760957" watchObservedRunningTime="2024-02-12 19:39:21.94570258 +0000 UTC m=+48.522427279" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 
19:39:21.945 [INFO][2570] k8s.go 578: Cleaning up netns ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.945 [INFO][2570] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" iface="eth0" netns="/var/run/netns/cni-6f9bf846-bb6f-71e2-3acc-fbbf295bf0b1" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.945 [INFO][2570] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" iface="eth0" netns="/var/run/netns/cni-6f9bf846-bb6f-71e2-3acc-fbbf295bf0b1" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.946 [INFO][2570] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" iface="eth0" netns="/var/run/netns/cni-6f9bf846-bb6f-71e2-3acc-fbbf295bf0b1" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.946 [INFO][2570] k8s.go 585: Releasing IP address(es) ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.946 [INFO][2570] utils.go 188: Calico CNI releasing IP address ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.962 [INFO][2584] ipam_plugin.go 415: Releasing address using handleID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.963 [INFO][2584] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.963 [INFO][2584] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.970 [WARNING][2584] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.970 [INFO][2584] ipam_plugin.go 443: Releasing address using workloadID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.971 [INFO][2584] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:21.973064 env[1188]: 2024-02-12 19:39:21.972 [INFO][2570] k8s.go 591: Teardown processing complete. 
ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:21.973678 env[1188]: time="2024-02-12T19:39:21.973207840Z" level=info msg="TearDown network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" successfully" Feb 12 19:39:21.973678 env[1188]: time="2024-02-12T19:39:21.973257747Z" level=info msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" returns successfully" Feb 12 19:39:21.974914 systemd[1]: run-netns-cni\x2d6f9bf846\x2dbb6f\x2d71e2\x2d3acc\x2dfbbf295bf0b1.mount: Deactivated successfully. Feb 12 19:39:21.976026 env[1188]: time="2024-02-12T19:39:21.975999114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-d69g6,Uid:63c921d3-6482-4280-a1be-aea3dbb6c75c,Namespace:default,Attempt:1,}" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.947 [INFO][2569] k8s.go 578: Cleaning up netns ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.947 [INFO][2569] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" iface="eth0" netns="/var/run/netns/cni-141693b8-35ae-fd8a-270d-35adab9fc873" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.948 [INFO][2569] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" iface="eth0" netns="/var/run/netns/cni-141693b8-35ae-fd8a-270d-35adab9fc873" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.948 [INFO][2569] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" iface="eth0" netns="/var/run/netns/cni-141693b8-35ae-fd8a-270d-35adab9fc873" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.948 [INFO][2569] k8s.go 585: Releasing IP address(es) ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.948 [INFO][2569] utils.go 188: Calico CNI releasing IP address ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.966 [INFO][2590] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.966 [INFO][2590] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.971 [INFO][2590] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.977 [WARNING][2590] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.977 [INFO][2590] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.978 [INFO][2590] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:21.982141 env[1188]: 2024-02-12 19:39:21.980 [INFO][2569] k8s.go 591: Teardown processing complete. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:21.982649 env[1188]: time="2024-02-12T19:39:21.982267146Z" level=info msg="TearDown network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" successfully" Feb 12 19:39:21.982649 env[1188]: time="2024-02-12T19:39:21.982295511Z" level=info msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" returns successfully" Feb 12 19:39:21.982930 env[1188]: time="2024-02-12T19:39:21.982901933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-726pv,Uid:285c6db4-31c3-48f8-ba5c-36202e194f38,Namespace:calico-system,Attempt:1,}" Feb 12 19:39:21.984001 systemd[1]: run-netns-cni\x2d141693b8\x2d35ae\x2dfd8a\x2d270d\x2d35adab9fc873.mount: Deactivated successfully. Feb 12 19:39:22.075628 systemd-networkd[1069]: cali4a9d767146b: Link UP Feb 12 19:39:22.077611 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:39:22.077655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4a9d767146b: link becomes ready Feb 12 19:39:22.077754 systemd-networkd[1069]: cali4a9d767146b: Gained carrier Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.021 [INFO][2599] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0 nginx-deployment-8ffc5cf85- default 63c921d3-6482-4280-a1be-aea3dbb6c75c 961 0 2024-02-12 19:39:09 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 nginx-deployment-8ffc5cf85-d69g6 eth0 default [] [] [kns.default ksa.default.default] cali4a9d767146b [] []}} ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.021 [INFO][2599] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.044 [INFO][2628] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" HandleID="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 
19:39:22.087302 env[1188]: 2024-02-12 19:39:22.052 [INFO][2628] ipam_plugin.go 268: Auto assigning IP ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" HandleID="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcbc0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"nginx-deployment-8ffc5cf85-d69g6", "timestamp":"2024-02-12 19:39:22.044647869 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.052 [INFO][2628] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.052 [INFO][2628] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.052 [INFO][2628] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.053 [INFO][2628] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.056 [INFO][2628] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.060 [INFO][2628] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.061 [INFO][2628] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.062 [INFO][2628] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.062 [INFO][2628] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.063 [INFO][2628] ipam.go 1682: Creating new handle: k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421 Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.066 [INFO][2628] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.068 [INFO][2628] ipam.go 1216: Successfully claimed IPs: [192.168.98.1/26] block=192.168.98.0/26 handle="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.068 [INFO][2628] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.1/26] handle="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" host="10.0.0.89" Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.068 [INFO][2628] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:39:22.087302 env[1188]: 2024-02-12 19:39:22.068 [INFO][2628] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.1/26] IPv6=[] ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" HandleID="k8s-pod-network.7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.070 [INFO][2599] k8s.go 385: Populated endpoint ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"63c921d3-6482-4280-a1be-aea3dbb6c75c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-d69g6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4a9d767146b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.070 [INFO][2599] k8s.go 386: Calico CNI using IPs: [192.168.98.1/32] ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.070 [INFO][2599] dataplane_linux.go 68: Setting the host side veth name to cali4a9d767146b ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.078 [INFO][2599] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.078 [INFO][2599] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"63c921d3-6482-4280-a1be-aea3dbb6c75c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421", Pod:"nginx-deployment-8ffc5cf85-d69g6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4a9d767146b", MAC:"0a:61:08:13:5e:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:22.088142 env[1188]: 2024-02-12 19:39:22.083 [INFO][2599] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421" Namespace="default" Pod="nginx-deployment-8ffc5cf85-d69g6" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:22.093000 audit[2663]: NETFILTER_CFG table=filter:85 family=2 entries=36 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:22.093000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=19876 a0=3 a1=7ffdbc5dc4c0 a2=0 a3=7ffdbc5dc4ac items=0 ppid=2331 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:22.093000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:22.098315 systemd-networkd[1069]: calibef62572c3a: Link UP Feb 12 19:39:22.099301 systemd-networkd[1069]: calibef62572c3a: Gained carrier Feb 12 19:39:22.099397 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibef62572c3a: link becomes ready Feb 12 19:39:22.103095 env[1188]: time="2024-02-12T19:39:22.103033238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:39:22.103281 env[1188]: time="2024-02-12T19:39:22.103074789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:39:22.103281 env[1188]: time="2024-02-12T19:39:22.103086301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:39:22.103412 env[1188]: time="2024-02-12T19:39:22.103267551Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421 pid=2671 runtime=io.containerd.runc.v2 Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.025 [INFO][2610] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-csi--node--driver--726pv-eth0 csi-node-driver- calico-system 285c6db4-31c3-48f8-ba5c-36202e194f38 962 0 2024-02-12 19:38:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 csi-node-driver-726pv eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibef62572c3a [] []}} ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.025 [INFO][2610] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.047 [INFO][2633] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" HandleID="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.055 [INFO][2633] ipam_plugin.go 268: Auto assigning IP ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" HandleID="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000247b00), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.89", "pod":"csi-node-driver-726pv", "timestamp":"2024-02-12 19:39:22.047151912 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.055 [INFO][2633] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.068 [INFO][2633] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.068 [INFO][2633] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.070 [INFO][2633] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.074 [INFO][2633] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.080 [INFO][2633] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.081 [INFO][2633] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.086 [INFO][2633] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.086 [INFO][2633] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.088 [INFO][2633] ipam.go 1682: Creating new handle: k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86 Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.091 [INFO][2633] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.094 [INFO][2633] ipam.go 1216: Successfully claimed IPs: [192.168.98.2/26] block=192.168.98.0/26 handle="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.094 [INFO][2633] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.2/26] handle="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" host="10.0.0.89" Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.094 [INFO][2633] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:39:22.108655 env[1188]: 2024-02-12 19:39:22.094 [INFO][2633] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.2/26] IPv6=[] ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" HandleID="k8s-pod-network.6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.096 [INFO][2610] k8s.go 385: Populated endpoint ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--726pv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"285c6db4-31c3-48f8-ba5c-36202e194f38", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"csi-node-driver-726pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibef62572c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.096 [INFO][2610] k8s.go 386: Calico CNI using IPs: [192.168.98.2/32] ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.096 [INFO][2610] dataplane_linux.go 68: Setting the host side veth name to calibef62572c3a ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.099 [INFO][2610] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.101 [INFO][2610] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--726pv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"285c6db4-31c3-48f8-ba5c-36202e194f38", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86", Pod:"csi-node-driver-726pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibef62572c3a", MAC:"fe:0e:ac:ad:34:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:22.109181 env[1188]: 2024-02-12 19:39:22.107 [INFO][2610] k8s.go 491: Wrote updated endpoint to datastore ContainerID="6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86" Namespace="calico-system" Pod="csi-node-driver-726pv" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:22.124000 audit[2713]: NETFILTER_CFG table=filter:86 family=2 entries=40 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:22.124000 audit[2713]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7fff8affe060 a2=0 a3=7fff8affe04c items=0 ppid=2331 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:22.124000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:22.126422 systemd-resolved[1128]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:39:22.128367 env[1188]: time="2024-02-12T19:39:22.127858448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:39:22.128367 env[1188]: time="2024-02-12T19:39:22.127890650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:39:22.128367 env[1188]: time="2024-02-12T19:39:22.127900009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:39:22.128367 env[1188]: time="2024-02-12T19:39:22.127989051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86 pid=2721 runtime=io.containerd.runc.v2 Feb 12 19:39:22.272980 systemd-resolved[1128]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:39:22.273647 env[1188]: time="2024-02-12T19:39:22.273602926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-d69g6,Uid:63c921d3-6482-4280-a1be-aea3dbb6c75c,Namespace:default,Attempt:1,} returns sandbox id \"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421\"" Feb 12 19:39:22.275054 env[1188]: time="2024-02-12T19:39:22.275022957Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:39:22.283132 env[1188]: time="2024-02-12T19:39:22.283085681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-726pv,Uid:285c6db4-31c3-48f8-ba5c-36202e194f38,Namespace:calico-system,Attempt:1,} returns sandbox id \"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86\"" Feb 12 19:39:22.734475 kubelet[1544]: E0212 19:39:22.734324 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:23.734876 kubelet[1544]: E0212 19:39:23.734829 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:23.960623 systemd-networkd[1069]: cali4a9d767146b: Gained IPv6LL Feb 12 19:39:24.152596 systemd-networkd[1069]: calibef62572c3a: Gained IPv6LL Feb 12 19:39:24.735607 kubelet[1544]: E0212 19:39:24.735553 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:25.736349 kubelet[1544]: E0212 19:39:25.736286 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:26.093773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695963492.mount: Deactivated successfully. 
Feb 12 19:39:26.737226 kubelet[1544]: E0212 19:39:26.737189 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:27.737843 kubelet[1544]: E0212 19:39:27.737791 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:28.528144 env[1188]: time="2024-02-12T19:39:28.528086273Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:28.531497 env[1188]: time="2024-02-12T19:39:28.531474799Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:28.533448 env[1188]: time="2024-02-12T19:39:28.533418371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:28.535409 env[1188]: time="2024-02-12T19:39:28.535380489Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:28.536030 env[1188]: time="2024-02-12T19:39:28.535997814Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:39:28.536639 env[1188]: time="2024-02-12T19:39:28.536531338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 19:39:28.537607 env[1188]: time="2024-02-12T19:39:28.537562669Z" level=info msg="CreateContainer within sandbox \"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:39:28.559165 env[1188]: time="2024-02-12T19:39:28.559096308Z" level=info msg="CreateContainer within sandbox \"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"083fa66a4be4a841210d450275758878cfaca55569e3d26cdb64e0cf8057602c\"" Feb 12 19:39:28.559698 env[1188]: time="2024-02-12T19:39:28.559661453Z" level=info msg="StartContainer for \"083fa66a4be4a841210d450275758878cfaca55569e3d26cdb64e0cf8057602c\"" Feb 12 19:39:28.598236 env[1188]: time="2024-02-12T19:39:28.598190928Z" level=info msg="StartContainer for \"083fa66a4be4a841210d450275758878cfaca55569e3d26cdb64e0cf8057602c\" returns successfully" Feb 12 19:39:28.737975 kubelet[1544]: E0212 19:39:28.737934 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:28.928564 kubelet[1544]: I0212 19:39:28.928468 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-d69g6" podStartSLOduration=-9.223372016926348e+09 pod.CreationTimestamp="2024-02-12 19:39:09 +0000 UTC" firstStartedPulling="2024-02-12 19:39:22.27467749 +0000 UTC m=+48.851402189" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:39:28.928310877 +0000 UTC m=+55.505035576" watchObservedRunningTime="2024-02-12 19:39:28.928428402 +0000 UTC m=+55.505153131" Feb 12 19:39:29.738780 kubelet[1544]: E0212 19:39:29.738729 1544 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:30.358097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168507523.mount: Deactivated successfully. Feb 12 19:39:30.739303 kubelet[1544]: E0212 19:39:30.739182 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:30.747373 env[1188]: time="2024-02-12T19:39:30.747325189Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:30.748968 env[1188]: time="2024-02-12T19:39:30.748943321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:30.750365 env[1188]: time="2024-02-12T19:39:30.750322126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:30.751758 env[1188]: time="2024-02-12T19:39:30.751731187Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:30.752186 env[1188]: time="2024-02-12T19:39:30.752158588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 12 19:39:30.753477 env[1188]: time="2024-02-12T19:39:30.753447008Z" level=info msg="CreateContainer within sandbox \"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 19:39:30.793930 env[1188]: time="2024-02-12T19:39:30.793894682Z" level=info msg="CreateContainer within sandbox \"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bd930b01eafb14c45cbfe7a5ae9d4628fad45ccaabfc075b400549d7268d030f\"" Feb 12 19:39:30.794303 env[1188]: time="2024-02-12T19:39:30.794273097Z" level=info msg="StartContainer for \"bd930b01eafb14c45cbfe7a5ae9d4628fad45ccaabfc075b400549d7268d030f\"" Feb 12 19:39:30.832846 env[1188]: time="2024-02-12T19:39:30.831873852Z" level=info msg="StartContainer for \"bd930b01eafb14c45cbfe7a5ae9d4628fad45ccaabfc075b400549d7268d030f\" returns successfully" Feb 12 19:39:30.833044 env[1188]: time="2024-02-12T19:39:30.833017354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 19:39:31.042000 audit[2893]: NETFILTER_CFG table=filter:87 family=2 entries=18 op=nft_register_rule pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.046359 kernel: kauditd_printk_skb: 116 callbacks suppressed Feb 12 19:39:31.046429 kernel: audit: type=1325 audit(1707766771.042:270): table=filter:87 family=2 entries=18 op=nft_register_rule pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.046458 kernel: audit: type=1300 audit(1707766771.042:270): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd7c4f0b10 a2=0 a3=7ffd7c4f0afc items=0 ppid=1744 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.042000 audit[2893]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd7c4f0b10 a2=0 a3=7ffd7c4f0afc items=0 ppid=1744 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.047498 kubelet[1544]: I0212 19:39:31.047467 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:39:31.052180 kernel: audit: type=1327 audit(1707766771.042:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.043000 audit[2893]: NETFILTER_CFG table=nat:88 family=2 entries=78 op=nft_register_rule pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.043000 audit[2893]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd7c4f0b10 a2=0 a3=7ffd7c4f0afc items=0 ppid=1744 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.061254 kernel: audit: type=1325 audit(1707766771.043:271): table=nat:88 family=2 entries=78 op=nft_register_rule pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.061369 kernel: audit: type=1300 audit(1707766771.043:271): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd7c4f0b10 a2=0 a3=7ffd7c4f0afc items=0 ppid=1744 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.061397 kernel: audit: type=1327 audit(1707766771.043:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.080000 audit[2922]: NETFILTER_CFG table=filter:89 family=2 entries=30 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.080000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffcf4b3bd90 a2=0 a3=7ffcf4b3bd7c items=0 ppid=1744 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.086958 kernel: audit: type=1325 audit(1707766771.080:272): table=filter:89 family=2 entries=30 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.087002 kernel: audit: type=1300 audit(1707766771.080:272): arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffcf4b3bd90 a2=0 a3=7ffcf4b3bd7c items=0 ppid=1744 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:39:31.087020 kernel: audit: type=1327 audit(1707766771.080:272): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.081000 audit[2922]: NETFILTER_CFG table=nat:90 family=2 entries=78 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.081000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffcf4b3bd90 a2=0 a3=7ffcf4b3bd7c items=0 ppid=1744 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:31.096353 kernel: audit: type=1325 audit(1707766771.081:273): table=nat:90 family=2 entries=78 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:31.206618 kubelet[1544]: I0212 19:39:31.206581 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/487a5a6d-5362-435e-8646-a44c3f2fb0b9-data\") pod \"nfs-server-provisioner-0\" (UID: \"487a5a6d-5362-435e-8646-a44c3f2fb0b9\") " pod="default/nfs-server-provisioner-0" Feb 12 19:39:31.206618 kubelet[1544]: I0212 19:39:31.206622 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2zr\" (UniqueName: \"kubernetes.io/projected/487a5a6d-5362-435e-8646-a44c3f2fb0b9-kube-api-access-qr2zr\") pod \"nfs-server-provisioner-0\" (UID: \"487a5a6d-5362-435e-8646-a44c3f2fb0b9\") " pod="default/nfs-server-provisioner-0" Feb 12 19:39:31.350663 env[1188]: time="2024-02-12T19:39:31.350590019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:487a5a6d-5362-435e-8646-a44c3f2fb0b9,Namespace:default,Attempt:0,}" Feb 12 19:39:31.431846 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:39:31.431959 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 12 19:39:31.432282 systemd-networkd[1069]: cali60e51b789ff: Link UP Feb 12 19:39:31.432491 systemd-networkd[1069]: cali60e51b789ff: Gained carrier Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.384 [INFO][2924] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 487a5a6d-5362-435e-8646-a44c3f2fb0b9 1016 0 2024-02-12 19:39:31 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.89 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 
875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.384 [INFO][2924] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.402 [INFO][2938] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" HandleID="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.411 [INFO][2938] ipam_plugin.go 268: Auto assigning IP ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" HandleID="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025dbb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-12 19:39:31.402481206 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.411 [INFO][2938] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.411 [INFO][2938] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.411 [INFO][2938] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.412 [INFO][2938] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.414 [INFO][2938] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.417 [INFO][2938] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.418 [INFO][2938] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.420 [INFO][2938] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.420 [INFO][2938] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.421 [INFO][2938] ipam.go 1682: Creating new handle: k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1 Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.424 [INFO][2938] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.427 [INFO][2938] ipam.go 1216: Successfully claimed IPs: [192.168.98.3/26] block=192.168.98.0/26 handle="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.427 [INFO][2938] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.3/26] handle="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" host="10.0.0.89" Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.427 [INFO][2938] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:39:31.442422 env[1188]: 2024-02-12 19:39:31.427 [INFO][2938] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.3/26] IPv6=[] ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" HandleID="k8s-pod-network.bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.443203 env[1188]: 2024-02-12 19:39:31.428 [INFO][2924] k8s.go 385: Populated endpoint ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"487a5a6d-5362-435e-8646-a44c3f2fb0b9", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:31.443203 env[1188]: 2024-02-12 19:39:31.428 [INFO][2924] k8s.go 386: Calico CNI using IPs: [192.168.98.3/32] ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.443203 env[1188]: 2024-02-12 19:39:31.428 [INFO][2924] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.443203 env[1188]: 2024-02-12 19:39:31.432 [INFO][2924] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.443400 env[1188]: 2024-02-12 19:39:31.432 [INFO][2924] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"487a5a6d-5362-435e-8646-a44c3f2fb0b9", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3a:57:18:2f:94:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:31.443400 env[1188]: 2024-02-12 19:39:31.438 [INFO][2924] k8s.go 491: Wrote updated endpoint to datastore ContainerID="bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:39:31.452476 env[1188]: time="2024-02-12T19:39:31.452197405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:39:31.452476 env[1188]: time="2024-02-12T19:39:31.452261118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:39:31.452476 env[1188]: time="2024-02-12T19:39:31.452289803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:39:31.452681 env[1188]: time="2024-02-12T19:39:31.452555662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1 pid=2965 runtime=io.containerd.runc.v2 Feb 12 19:39:31.460000 audit[2979]: NETFILTER_CFG table=filter:91 family=2 entries=38 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:31.460000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=19500 a0=3 a1=7ffe9afcbbf0 a2=0 a3=7ffe9afcbbdc items=0 ppid=2331 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:31.460000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:31.477551 systemd-resolved[1128]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:39:31.499254 env[1188]: time="2024-02-12T19:39:31.499203851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:487a5a6d-5362-435e-8646-a44c3f2fb0b9,Namespace:default,Attempt:0,} returns sandbox id \"bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1\"" Feb 12 19:39:31.739795 kubelet[1544]: E0212 19:39:31.739659 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:32.739964 kubelet[1544]: E0212 19:39:32.739924 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:32.792478 systemd-networkd[1069]: cali60e51b789ff: Gained IPv6LL Feb 12 19:39:32.853523 env[1188]: time="2024-02-12T19:39:32.853478386Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:32.855052 env[1188]: time="2024-02-12T19:39:32.855018807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:32.856761 env[1188]: time="2024-02-12T19:39:32.856730615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:32.858037 env[1188]: time="2024-02-12T19:39:32.858009565Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:32.858414 env[1188]: time="2024-02-12T19:39:32.858389973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 12 19:39:32.859141 env[1188]: time="2024-02-12T19:39:32.859105273Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:39:32.859754 env[1188]: 
time="2024-02-12T19:39:32.859726633Z" level=info msg="CreateContainer within sandbox \"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 19:39:32.870851 env[1188]: time="2024-02-12T19:39:32.870817447Z" level=info msg="CreateContainer within sandbox \"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e12c7904397311d537a45c92f8beab2ce2192096358a809d31e381689aac9c1b\"" Feb 12 19:39:32.871223 env[1188]: time="2024-02-12T19:39:32.871193567Z" level=info msg="StartContainer for \"e12c7904397311d537a45c92f8beab2ce2192096358a809d31e381689aac9c1b\"" Feb 12 19:39:32.908187 env[1188]: time="2024-02-12T19:39:32.908148909Z" level=info msg="StartContainer for \"e12c7904397311d537a45c92f8beab2ce2192096358a809d31e381689aac9c1b\" returns successfully" Feb 12 19:39:32.938768 kubelet[1544]: I0212 19:39:32.938726 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-726pv" podStartSLOduration=-9.223371989916086e+09 pod.CreationTimestamp="2024-02-12 19:38:46 +0000 UTC" firstStartedPulling="2024-02-12 19:39:22.283765552 +0000 UTC m=+48.860490252" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:39:32.938544648 +0000 UTC m=+59.515269347" watchObservedRunningTime="2024-02-12 19:39:32.938688794 +0000 UTC m=+59.515413483" Feb 12 19:39:33.703790 kubelet[1544]: E0212 19:39:33.703727 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:33.710874 env[1188]: time="2024-02-12T19:39:33.710823621Z" level=info msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" Feb 12 19:39:33.741071 kubelet[1544]: E0212 19:39:33.741035 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.745 [WARNING][3053] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"63c921d3-6482-4280-a1be-aea3dbb6c75c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421", Pod:"nginx-deployment-8ffc5cf85-d69g6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4a9d767146b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.745 [INFO][3053] k8s.go 578: Cleaning up netns ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.745 [INFO][3053] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" iface="eth0" netns="" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.746 [INFO][3053] k8s.go 585: Releasing IP address(es) ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.746 [INFO][3053] utils.go 188: Calico CNI releasing IP address ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.766 [INFO][3060] ipam_plugin.go 415: Releasing address using handleID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.766 [INFO][3060] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.766 [INFO][3060] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.775 [WARNING][3060] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.775 [INFO][3060] ipam_plugin.go 443: Releasing address using workloadID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.777 [INFO][3060] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:33.779520 env[1188]: 2024-02-12 19:39:33.778 [INFO][3053] k8s.go 591: Teardown processing complete. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:33.779975 env[1188]: time="2024-02-12T19:39:33.779541490Z" level=info msg="TearDown network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" successfully" Feb 12 19:39:33.779975 env[1188]: time="2024-02-12T19:39:33.779567650Z" level=info msg="StopPodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" returns successfully" Feb 12 19:39:33.780643 env[1188]: time="2024-02-12T19:39:33.780625726Z" level=info msg="RemovePodSandbox for \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" Feb 12 19:39:33.780707 env[1188]: time="2024-02-12T19:39:33.780648850Z" level=info msg="Forcibly stopping sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\"" Feb 12 19:39:33.796487 kubelet[1544]: I0212 19:39:33.795479 1544 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 19:39:33.796487 kubelet[1544]: I0212 19:39:33.795512 1544 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.008 [WARNING][3083] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"63c921d3-6482-4280-a1be-aea3dbb6c75c", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"7824ef162519b6b4050c805f919cf4c69b5facd6a89338530fcc8ba9e374a421", Pod:"nginx-deployment-8ffc5cf85-d69g6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4a9d767146b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.008 [INFO][3083] k8s.go 578: Cleaning up netns ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.008 [INFO][3083] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" iface="eth0" netns="" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.008 [INFO][3083] k8s.go 585: Releasing IP address(es) ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.008 [INFO][3083] utils.go 188: Calico CNI releasing IP address ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.021 [INFO][3093] ipam_plugin.go 415: Releasing address using handleID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.021 [INFO][3093] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.021 [INFO][3093] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.026 [WARNING][3093] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.026 [INFO][3093] ipam_plugin.go 443: Releasing address using workloadID ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" HandleID="k8s-pod-network.24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--d69g6-eth0" Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.027 [INFO][3093] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:34.029203 env[1188]: 2024-02-12 19:39:34.028 [INFO][3083] k8s.go 591: Teardown processing complete. ContainerID="24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8" Feb 12 19:39:34.030043 env[1188]: time="2024-02-12T19:39:34.029221065Z" level=info msg="TearDown network for sandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" successfully" Feb 12 19:39:34.130779 env[1188]: time="2024-02-12T19:39:34.129622058Z" level=info msg="RemovePodSandbox \"24ed231634f1d542e43ae47d45d1835e185cd8b0b86c275995b17342dae1c5f8\" returns successfully" Feb 12 19:39:34.130779 env[1188]: time="2024-02-12T19:39:34.130120772Z" level=info msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.159 [WARNING][3117] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--726pv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"285c6db4-31c3-48f8-ba5c-36202e194f38", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86", Pod:"csi-node-driver-726pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibef62572c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.159 [INFO][3117] k8s.go 578: Cleaning up netns ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.159 [INFO][3117] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" iface="eth0" netns="" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.159 [INFO][3117] k8s.go 585: Releasing IP address(es) ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.159 [INFO][3117] utils.go 188: Calico CNI releasing IP address ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.177 [INFO][3125] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.177 [INFO][3125] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.177 [INFO][3125] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.183 [WARNING][3125] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.183 [INFO][3125] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.184 [INFO][3125] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:34.186347 env[1188]: 2024-02-12 19:39:34.185 [INFO][3117] k8s.go 591: Teardown processing complete. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.186886 env[1188]: time="2024-02-12T19:39:34.186362894Z" level=info msg="TearDown network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" successfully" Feb 12 19:39:34.186886 env[1188]: time="2024-02-12T19:39:34.186389174Z" level=info msg="StopPodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" returns successfully" Feb 12 19:39:34.187185 env[1188]: time="2024-02-12T19:39:34.187131525Z" level=info msg="RemovePodSandbox for \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" Feb 12 19:39:34.187245 env[1188]: time="2024-02-12T19:39:34.187178225Z" level=info msg="Forcibly stopping sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\"" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.215 [WARNING][3147] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--726pv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"285c6db4-31c3-48f8-ba5c-36202e194f38", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 38, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"6c64f6f6bf50e4b08d8350491279031f03966a6392b11cc1465ff0821ef0eb86", Pod:"csi-node-driver-726pv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibef62572c3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.215 [INFO][3147] k8s.go 578: Cleaning up netns ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.215 [INFO][3147] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" iface="eth0" netns="" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.215 [INFO][3147] k8s.go 585: Releasing IP address(es) ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.215 [INFO][3147] utils.go 188: Calico CNI releasing IP address ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.239 [INFO][3155] ipam_plugin.go 415: Releasing address using handleID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.239 [INFO][3155] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.239 [INFO][3155] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.246 [WARNING][3155] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.246 [INFO][3155] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" HandleID="k8s-pod-network.4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Workload="10.0.0.89-k8s-csi--node--driver--726pv-eth0" Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.247 [INFO][3155] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:39:34.249638 env[1188]: 2024-02-12 19:39:34.248 [INFO][3147] k8s.go 591: Teardown processing complete. ContainerID="4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942" Feb 12 19:39:34.249638 env[1188]: time="2024-02-12T19:39:34.249597676Z" level=info msg="TearDown network for sandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" successfully" Feb 12 19:39:34.255618 env[1188]: time="2024-02-12T19:39:34.255579942Z" level=info msg="RemovePodSandbox \"4ed3946a07a83e959be5196b3b5902fdc26eab258d7c77ebe3d0c24b2dc5b942\" returns successfully" Feb 12 19:39:34.741755 kubelet[1544]: E0212 19:39:34.741691 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:35.116779 kubelet[1544]: E0212 19:39:35.116742 1544 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:39:35.316044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682853316.mount: Deactivated successfully. 
Feb 12 19:39:35.742221 kubelet[1544]: E0212 19:39:35.742165 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:36.742583 kubelet[1544]: E0212 19:39:36.742536 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:37.672908 env[1188]: time="2024-02-12T19:39:37.672858650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:37.674549 env[1188]: time="2024-02-12T19:39:37.674525455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:37.676022 env[1188]: time="2024-02-12T19:39:37.676002848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:37.677378 env[1188]: time="2024-02-12T19:39:37.677357607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:37.677911 env[1188]: time="2024-02-12T19:39:37.677887349Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 19:39:37.679432 env[1188]: time="2024-02-12T19:39:37.679410450Z" level=info msg="CreateContainer within sandbox \"bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:39:37.689379 env[1188]: time="2024-02-12T19:39:37.689351725Z" level=info msg="CreateContainer within sandbox \"bd1422869f443b6794b0ca38a9b3b3b97097717b5a3e67b0016a8104316fb9b1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"78253a895463cc7c73b173123243c23af0a142e0b77838c77db999121d6a9a0e\"" Feb 12 19:39:37.689678 env[1188]: time="2024-02-12T19:39:37.689653291Z" level=info msg="StartContainer for \"78253a895463cc7c73b173123243c23af0a142e0b77838c77db999121d6a9a0e\"" Feb 12 19:39:37.708531 systemd[1]: run-containerd-runc-k8s.io-78253a895463cc7c73b173123243c23af0a142e0b77838c77db999121d6a9a0e-runc.WbPrUn.mount: Deactivated successfully. 
Feb 12 19:39:37.727149 env[1188]: time="2024-02-12T19:39:37.727106628Z" level=info msg="StartContainer for \"78253a895463cc7c73b173123243c23af0a142e0b77838c77db999121d6a9a0e\" returns successfully" Feb 12 19:39:37.742763 kubelet[1544]: E0212 19:39:37.742733 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:37.950807 kubelet[1544]: I0212 19:39:37.950706 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372029904097e+09 pod.CreationTimestamp="2024-02-12 19:39:31 +0000 UTC" firstStartedPulling="2024-02-12 19:39:31.50036649 +0000 UTC m=+58.077091189" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:39:37.949599161 +0000 UTC m=+64.526323860" watchObservedRunningTime="2024-02-12 19:39:37.950679095 +0000 UTC m=+64.527403794" Feb 12 19:39:37.969038 kubelet[1544]: I0212 19:39:37.969020 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:39:37.984000 audit[3273]: NETFILTER_CFG table=filter:92 family=2 entries=18 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:37.985815 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 19:39:37.985894 kernel: audit: type=1325 audit(1707766777.984:275): table=filter:92 family=2 entries=18 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:37.984000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcf1df5000 a2=0 a3=7ffcf1df4fec items=0 ppid=1744 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:37.990565 kernel: audit: type=1300 audit(1707766777.984:275): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffcf1df5000 a2=0 a3=7ffcf1df4fec items=0 ppid=1744 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:37.990592 kernel: audit: type=1327 audit(1707766777.984:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:37.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:37.988000 audit[3273]: NETFILTER_CFG table=nat:93 family=2 entries=162 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:37.988000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffcf1df5000 a2=0 a3=7ffcf1df4fec items=0 ppid=1744 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.000538 kernel: audit: type=1325 audit(1707766777.988:276): table=nat:93 family=2 entries=162 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:38.000582 kernel: audit: type=1300 audit(1707766777.988:276): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffcf1df5000 a2=0 a3=7ffcf1df4fec items=0 ppid=1744 pid=3273 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.000600 kernel: audit: type=1327 audit(1707766777.988:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:37.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:38.033000 audit[3300]: NETFILTER_CFG table=filter:94 family=2 entries=7 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:38.033000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc91b184e0 a2=0 a3=7ffc91b184cc items=0 ppid=1744 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.040109 kernel: audit: type=1325 audit(1707766778.033:277): table=filter:94 family=2 entries=7 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:38.040156 kernel: audit: type=1300 audit(1707766778.033:277): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc91b184e0 a2=0 a3=7ffc91b184cc items=0 ppid=1744 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.040176 kernel: audit: type=1327 audit(1707766778.033:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:38.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:38.037000 audit[3300]: NETFILTER_CFG table=nat:95 family=2 entries=198 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:38.037000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc91b184e0 a2=0 a3=7ffc91b184cc items=0 ppid=1744 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:38.046353 kernel: audit: type=1325 audit(1707766778.037:278): table=nat:95 family=2 entries=198 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:38.139905 kubelet[1544]: I0212 19:39:38.139877 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4w74\" (UniqueName: \"kubernetes.io/projected/6cd30744-d5a8-44a8-8b76-a23fc8b5de9a-kube-api-access-c4w74\") pod \"calico-apiserver-564b86d5f4-b5tfn\" (UID: \"6cd30744-d5a8-44a8-8b76-a23fc8b5de9a\") " pod="calico-apiserver/calico-apiserver-564b86d5f4-b5tfn" Feb 12 19:39:38.140059 kubelet[1544]: I0212 19:39:38.139929 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/6cd30744-d5a8-44a8-8b76-a23fc8b5de9a-calico-apiserver-certs\") pod \"calico-apiserver-564b86d5f4-b5tfn\" (UID: \"6cd30744-d5a8-44a8-8b76-a23fc8b5de9a\") " pod="calico-apiserver/calico-apiserver-564b86d5f4-b5tfn" Feb 12 19:39:38.272111 env[1188]: time="2024-02-12T19:39:38.272079630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564b86d5f4-b5tfn,Uid:6cd30744-d5a8-44a8-8b76-a23fc8b5de9a,Namespace:calico-apiserver,Attempt:0,}" Feb 12 19:39:38.354406 systemd-networkd[1069]: calib9fbd0798be: Link UP Feb 12 19:39:38.355777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:39:38.355833 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib9fbd0798be: link becomes ready Feb 12 19:39:38.355921 systemd-networkd[1069]: calib9fbd0798be: Gained carrier Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.305 [INFO][3309] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0 calico-apiserver-564b86d5f4- calico-apiserver 6cd30744-d5a8-44a8-8b76-a23fc8b5de9a 1106 0 2024-02-12 19:39:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564b86d5f4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.89 calico-apiserver-564b86d5f4-b5tfn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9fbd0798be [] []}} ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.305 [INFO][3309] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.325 [INFO][3323] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" HandleID="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Workload="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.333 [INFO][3323] ipam_plugin.go 268: Auto assigning IP ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" HandleID="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Workload="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029d8a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.89", "pod":"calico-apiserver-564b86d5f4-b5tfn", "timestamp":"2024-02-12 19:39:38.325037875 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.333 [INFO][3323] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.333 [INFO][3323] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.333 [INFO][3323] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.335 [INFO][3323] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.337 [INFO][3323] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.339 [INFO][3323] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.340 [INFO][3323] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.345 [INFO][3323] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.345 [INFO][3323] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.346 [INFO][3323] ipam.go 1682: Creating new handle: k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4 Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.348 [INFO][3323] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.351 [INFO][3323] ipam.go 1216: Successfully claimed IPs: [192.168.98.4/26] block=192.168.98.0/26 handle="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.351 [INFO][3323] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.4/26] handle="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" host="10.0.0.89" Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.352 [INFO][3323] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:39:38.366845 env[1188]: 2024-02-12 19:39:38.352 [INFO][3323] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.4/26] IPv6=[] ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" HandleID="k8s-pod-network.83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Workload="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.353 [INFO][3309] k8s.go 385: Populated endpoint ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0", GenerateName:"calico-apiserver-564b86d5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cd30744-d5a8-44a8-8b76-a23fc8b5de9a", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564b86d5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"calico-apiserver-564b86d5f4-b5tfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9fbd0798be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.353 [INFO][3309] k8s.go 386: Calico CNI using IPs: [192.168.98.4/32] ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.353 [INFO][3309] dataplane_linux.go 68: Setting the host side veth name to calib9fbd0798be ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.355 [INFO][3309] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.356 [INFO][3309] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" 
WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0", GenerateName:"calico-apiserver-564b86d5f4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cd30744-d5a8-44a8-8b76-a23fc8b5de9a", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564b86d5f4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4", Pod:"calico-apiserver-564b86d5f4-b5tfn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9fbd0798be", MAC:"ee:d6:2b:8c:f1:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:39:38.367445 env[1188]: 2024-02-12 19:39:38.364 [INFO][3309] k8s.go 491: Wrote updated endpoint to datastore ContainerID="83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4" Namespace="calico-apiserver" Pod="calico-apiserver-564b86d5f4-b5tfn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--564b86d5f4--b5tfn-eth0" Feb 12 19:39:38.377000 audit[3351]: NETFILTER_CFG table=filter:96 family=2 entries=61 op=nft_register_chain pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:39:38.377000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=30956 a0=3 a1=7fff0c2e2c30 a2=0 a3=7fff0c2e2c1c items=0 ppid=2331 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:38.377000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:39:38.493835 env[1188]: time="2024-02-12T19:39:38.493773542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:39:38.494029 env[1188]: time="2024-02-12T19:39:38.493817206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:39:38.494029 env[1188]: time="2024-02-12T19:39:38.493830190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:39:38.494117 env[1188]: time="2024-02-12T19:39:38.494083524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4 pid=3359 runtime=io.containerd.runc.v2 Feb 12 19:39:38.513384 systemd-resolved[1128]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:39:38.535548 env[1188]: time="2024-02-12T19:39:38.535431973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564b86d5f4-b5tfn,Uid:6cd30744-d5a8-44a8-8b76-a23fc8b5de9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4\"" Feb 12 19:39:38.537777 env[1188]: time="2024-02-12T19:39:38.537755902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 19:39:38.743491 kubelet[1544]: E0212 19:39:38.743459 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:39.744457 kubelet[1544]: E0212 19:39:39.744419 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:39.896477 systemd-networkd[1069]: calib9fbd0798be: Gained IPv6LL Feb 12 19:39:40.744573 kubelet[1544]: E0212 19:39:40.744541 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:41.745095 kubelet[1544]: E0212 19:39:41.745052 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:42.746173 kubelet[1544]: E0212 19:39:42.746125 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:42.767958 env[1188]: time="2024-02-12T19:39:42.767898502Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:42.769677 env[1188]: time="2024-02-12T19:39:42.769644282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:42.771297 env[1188]: time="2024-02-12T19:39:42.771272358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:42.772980 env[1188]: time="2024-02-12T19:39:42.772951480Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:39:42.773683 env[1188]: time="2024-02-12T19:39:42.773643100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 12 19:39:42.775651 env[1188]: time="2024-02-12T19:39:42.775623348Z" level=info msg="CreateContainer within sandbox \"83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 
19:39:42.785449 env[1188]: time="2024-02-12T19:39:42.785404084Z" level=info msg="CreateContainer within sandbox \"83b44c8c3a8ff30c0866bb09ad1aba1f43dcc1564adf61e2add77fb1a25ddfc4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8f7ce1ae38b0cd3d7f56362ac6b96ab4c674cfd30a54c9572aa940a7b346fd4e\"" Feb 12 19:39:42.785858 env[1188]: time="2024-02-12T19:39:42.785829716Z" level=info msg="StartContainer for \"8f7ce1ae38b0cd3d7f56362ac6b96ab4c674cfd30a54c9572aa940a7b346fd4e\"" Feb 12 19:39:42.838524 env[1188]: time="2024-02-12T19:39:42.838466984Z" level=info msg="StartContainer for \"8f7ce1ae38b0cd3d7f56362ac6b96ab4c674cfd30a54c9572aa940a7b346fd4e\" returns successfully" Feb 12 19:39:42.959803 kubelet[1544]: I0212 19:39:42.959674 1544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564b86d5f4-b5tfn" podStartSLOduration=-9.223372030895142e+09 pod.CreationTimestamp="2024-02-12 19:39:37 +0000 UTC" firstStartedPulling="2024-02-12 19:39:38.53744048 +0000 UTC m=+65.114165179" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:39:42.958555586 +0000 UTC m=+69.535280285" watchObservedRunningTime="2024-02-12 19:39:42.959633462 +0000 UTC m=+69.536358161" Feb 12 19:39:42.991000 audit[3458]: NETFILTER_CFG table=filter:97 family=2 entries=8 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:42.992485 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 19:39:42.992526 kernel: audit: type=1325 audit(1707766782.991:280): table=filter:97 family=2 entries=8 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:42.991000 audit[3458]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd61ab1660 a2=0 a3=7ffd61ab164c items=0 ppid=1744 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:42.997287 kernel: audit: type=1300 audit(1707766782.991:280): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd61ab1660 a2=0 a3=7ffd61ab164c items=0 ppid=1744 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:42.997345 kernel: audit: type=1327 audit(1707766782.991:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:42.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.011858 kernel: audit: type=1325 audit(1707766783.003:281): table=nat:98 family=2 entries=198 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:43.011973 kernel: audit: type=1300 audit(1707766783.003:281): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd61ab1660 a2=0 a3=7ffd61ab164c items=0 ppid=1744 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:43.012003 kernel: audit: type=1327 audit(1707766783.003:281): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.003000 audit[3458]: NETFILTER_CFG table=nat:98 family=2 entries=198 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:43.003000 audit[3458]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd61ab1660 a2=0 a3=7ffd61ab164c items=0 ppid=1744 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:43.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.746880 kubelet[1544]: E0212 19:39:43.746847 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:43.769000 audit[3487]: NETFILTER_CFG table=filter:99 family=2 entries=8 op=nft_register_rule pid=3487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:43.769000 audit[3487]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc663f18d0 a2=0 a3=7ffc663f18bc items=0 ppid=1744 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:43.774857 kernel: audit: type=1325 audit(1707766783.769:282): table=filter:99 family=2 entries=8 op=nft_register_rule pid=3487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:43.774893 kernel: audit: type=1300 audit(1707766783.769:282): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc663f18d0 a2=0 a3=7ffc663f18bc items=0 ppid=1744 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:43.774912 kernel: audit: type=1327 audit(1707766783.769:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.772000 audit[3487]: NETFILTER_CFG table=nat:100 family=2 entries=198 op=nft_register_rule pid=3487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:43.772000 audit[3487]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc663f18d0 a2=0 a3=7ffc663f18bc items=0 ppid=1744 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:43.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:39:43.781350 kernel: audit: type=1325 audit(1707766783.772:283): table=nat:100 family=2 entries=198 op=nft_register_rule pid=3487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:39:44.747308 kubelet[1544]: E0212 19:39:44.747276 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:45.748356 
kubelet[1544]: E0212 19:39:45.748298 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:46.749059 kubelet[1544]: E0212 19:39:46.749010 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:47.749585 kubelet[1544]: E0212 19:39:47.749553 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:48.750416 kubelet[1544]: E0212 19:39:48.750380 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:49.751351 kubelet[1544]: E0212 19:39:49.751281 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:50.752036 kubelet[1544]: E0212 19:39:50.751913 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:51.753071 kubelet[1544]: E0212 19:39:51.753024 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:51.901717 kubelet[1544]: I0212 19:39:51.901679 1544 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:39:52.002001 kubelet[1544]: I0212 19:39:52.001975 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmrc\" (UniqueName: \"kubernetes.io/projected/2b47601d-4fc2-4a26-b05a-504369c30fa6-kube-api-access-lcmrc\") pod \"test-pod-1\" (UID: \"2b47601d-4fc2-4a26-b05a-504369c30fa6\") " pod="default/test-pod-1" Feb 12 19:39:52.002175 kubelet[1544]: I0212 19:39:52.002015 1544 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91e52d01-276d-4056-8d58-1850780991ad\" (UniqueName: \"kubernetes.io/nfs/2b47601d-4fc2-4a26-b05a-504369c30fa6-pvc-91e52d01-276d-4056-8d58-1850780991ad\") pod \"test-pod-1\" (UID: \"2b47601d-4fc2-4a26-b05a-504369c30fa6\") " pod="default/test-pod-1" Feb 12 19:39:52.116737 kernel: Failed to create system directory netfs Feb 12 19:39:52.116832 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 12 19:39:52.116850 kernel: audit: type=1400 audit(1707766792.110:284): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.116875 kernel: Failed to create system directory netfs Feb 12 19:39:52.116893 kernel: audit: type=1400 audit(1707766792.110:284): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.116910 kernel: Failed to create system directory netfs Feb 12 19:39:52.110000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.110000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.110000 audit[3493]: AVC avc: denied { confidentiality } for 
pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.119156 kernel: audit: type=1400 audit(1707766792.110:284): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.119200 kernel: Failed to create system directory netfs Feb 12 19:39:52.119667 kernel: audit: type=1400 audit(1707766792.110:284): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.110000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.110000 audit[3493]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562d8f1d75e0 a1=153bc a2=562d8d7482b0 a3=5 items=0 ppid=68 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.125398 kernel: audit: type=1300 audit(1707766792.110:284): arch=c000003e syscall=175 success=yes exit=0 a0=562d8f1d75e0 a1=153bc a2=562d8d7482b0 a3=5 items=0 ppid=68 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.110000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:39:52.130497 kernel: audit: type=1327 audit(1707766792.110:284): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.134323 kernel: Failed to create system directory fscache Feb 12 19:39:52.134354 kernel: audit: type=1400 audit(1707766792.128:285): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.134374 kernel: Failed to create system directory fscache Feb 12 19:39:52.134388 kernel: audit: type=1400 audit(1707766792.128:285): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.137508 kernel: Failed to create system directory fscache Feb 12 19:39:52.137544 kernel: audit: type=1400 audit(1707766792.128:285): 
avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.137575 kernel: Failed to create system directory fscache Feb 12 19:39:52.139915 kernel: audit: type=1400 audit(1707766792.128:285): avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.142876 kernel: Failed to create system directory fscache Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.144584 kernel: Failed to create system directory fscache Feb 12 19:39:52.144736 kernel: Failed to create system directory fscache Feb 12 19:39:52.144760 kernel: Failed to create system directory fscache Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.145570 kernel: Failed to create system directory fscache Feb 12 19:39:52.145617 kernel: Failed to create system directory fscache Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.146546 kernel: Failed to create system directory fscache Feb 12 19:39:52.146570 kernel: Failed to create system directory fscache Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
19:39:52.147549 kernel: Failed to create system directory fscache Feb 12 19:39:52.147619 kernel: Failed to create system directory fscache Feb 12 19:39:52.128000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.128000 audit[3493]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562d8f3ec9c0 a1=4c0fc a2=562d8d7482b0 a3=5 items=0 ppid=68 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.128000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:39:52.150360 kernel: FS-Cache: Loaded Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.175587 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.175616 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.175630 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.175644 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.176577 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.176601 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.177556 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.177577 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.178554 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.178575 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.179539 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.179557 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.180520 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.180544 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.181501 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.181518 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.182488 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.182505 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.183485 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.183503 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.184484 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.184505 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.185479 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.185496 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.186491 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.186514 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.187480 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.187501 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.188469 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.188495 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.189458 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.189474 kernel: Failed to create system directory sunrpc Feb 12 
19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.190442 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.190458 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.191432 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.191449 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.192423 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.192439 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.193406 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.193435 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.194389 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.194407 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.195373 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.195396 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.196368 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.196412 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.197349 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.197375 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.198831 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.198867 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.198898 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.199821 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.199860 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.200802 kernel: Failed to 
create system directory sunrpc Feb 12 19:39:52.200829 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.201790 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.201808 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.202785 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.202809 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.203791 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.203808 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.204358 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.205387 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.205418 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.206399 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.206430 kernel: Failed to create system directory sunrpc Feb 12 
19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.207429 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.207467 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.208413 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.208442 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.209410 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.209449 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.210397 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.210424 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.211373 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.211399 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.212362 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.212381 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.213353 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.213379 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.214362 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.214393 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.215818 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.215849 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.215878 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.216801 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.216827 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.217778 kernel: Failed to 
create system directory sunrpc Feb 12 19:39:52.217798 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.218759 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.218789 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.219771 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.220760 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.220776 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.220794 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.221745 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.221761 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.222720 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.222748 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for 
pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.223693 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.223714 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.224675 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.224707 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.225658 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.225696 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.226631 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.226657 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.227646 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.227666 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.228627 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.228655 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.229601 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.229627 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.230575 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.230595 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.231580 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.231607 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.232555 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.232576 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.233529 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.233553 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.234499 kernel: Failed to 
create system directory sunrpc Feb 12 19:39:52.234528 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.235506 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.235537 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.164000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.236489 kernel: Failed to create system directory sunrpc Feb 12 19:39:52.245672 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:39:52.245702 kernel: RPC: Registered udp transport module. Feb 12 19:39:52.245723 kernel: RPC: Registered tcp transport module. Feb 12 19:39:52.245739 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 19:39:52.164000 audit[3493]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562d8f438ad0 a1=1588c4 a2=562d8d7482b0 a3=5 items=6 ppid=68 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.164000 audit: CWD cwd="/" Feb 12 19:39:52.164000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PATH item=1 name=(null) inode=25410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PATH item=2 name=(null) inode=25410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PATH item=3 name=(null) inode=25411 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PATH item=4 name=(null) inode=25410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PATH item=5 name=(null) inode=25412 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:39:52.164000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.271657 kernel: Failed to create system directory nfs Feb 12 19:39:52.271677 kernel: Failed to create system directory nfs Feb 12 19:39:52.271690 kernel: Failed to create system directory nfs Feb 12 19:39:52.271702 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.272591 kernel: Failed to create system directory nfs Feb 12 19:39:52.272608 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.273520 kernel: Failed to create system directory nfs Feb 12 19:39:52.273539 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.274464 kernel: Failed to create system directory nfs Feb 12 19:39:52.274486 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.275430 kernel: Failed to create system directory nfs Feb 12 19:39:52.275460 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.276384 kernel: Failed to create system directory nfs Feb 12 19:39:52.276405 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.277755 kernel: Failed to create system directory nfs Feb 12 19:39:52.277772 kernel: Failed to create system directory nfs Feb 12 19:39:52.277784 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.278728 kernel: Failed to create system directory nfs Feb 12 19:39:52.278745 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.279785 kernel: Failed to create system directory nfs Feb 12 19:39:52.279804 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.280710 kernel: Failed to create system directory nfs Feb 12 19:39:52.280736 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.281639 kernel: Failed to create system directory nfs Feb 12 19:39:52.281659 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.282586 kernel: Failed to create system directory nfs Feb 12 19:39:52.282609 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.283518 kernel: Failed to create system directory nfs Feb 12 19:39:52.283538 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.284472 kernel: Failed to create system directory nfs Feb 12 19:39:52.284511 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.285400 kernel: Failed to create system directory nfs Feb 12 19:39:52.285428 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.286499 kernel: Failed to create system directory nfs Feb 12 19:39:52.286526 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.287448 kernel: Failed to create system directory nfs Feb 12 19:39:52.287479 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.288374 kernel: Failed to create system directory nfs Feb 12 19:39:52.288406 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.289806 kernel: Failed to create system directory nfs Feb 12 19:39:52.289831 kernel: Failed to create system directory nfs Feb 12 19:39:52.289853 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.291513 kernel: Failed to create system directory nfs Feb 12 19:39:52.291537 kernel: Failed to create system directory nfs Feb 12 19:39:52.291562 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.292445 kernel: Failed to create system directory nfs Feb 12 19:39:52.292470 kernel: Failed to create system directory nfs Feb 12 
19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.293370 kernel: Failed to create system directory nfs Feb 12 19:39:52.293397 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.294776 kernel: Failed to create system directory nfs Feb 12 19:39:52.294802 kernel: Failed to create system directory nfs Feb 12 19:39:52.294820 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.295709 kernel: Failed to create system directory nfs Feb 12 19:39:52.295746 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.296628 kernel: Failed to create system directory nfs Feb 12 19:39:52.296652 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.297567 kernel: Failed to create system directory nfs Feb 12 19:39:52.297593 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.263000 audit[3493]: AVC avc: denied { confidentiality } for pid=3493 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.314348 kernel: Failed to create system directory nfs Feb 12 19:39:52.263000 audit[3493]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562d8f5db680 a1=e29dc a2=562d8d7482b0 a3=5 items=0 ppid=68 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.263000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:39:52.327357 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.359795 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.359813 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.359825 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.360747 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.360774 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.361691 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.361711 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.362632 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.362652 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { 
confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.363580 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.363599 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.364524 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.364547 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.365469 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.365498 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.366418 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.366438 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.367589 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.367627 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
19:39:52.370143 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.370194 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.373695 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.373728 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.373750 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.375559 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.375598 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.375612 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.376532 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.376553 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.377496 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.377518 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality 
} for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.378438 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.378459 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.379374 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.379393 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.380789 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.380807 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.380827 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.381744 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.381765 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.382685 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.382702 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.383631 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.383648 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.384589 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.384617 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.385531 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.385568 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.386494 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.386517 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.387440 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.387457 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.388383 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.388399 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.389797 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.389813 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.389830 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.390746 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.390775 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.391690 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.391709 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.392632 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.392663 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.393771 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.393798 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.394727 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.394746 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.395684 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.395722 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.396623 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.396642 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.397578 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.397597 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.398528 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.398559 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.399473 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.399503 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.400413 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.400439 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.401355 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.401375 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.402795 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.402814 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.402826 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.403744 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.403763 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.404695 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.404716 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 
audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.405689 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.405720 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.406670 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.406689 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.407670 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.407689 kernel: Failed to create system directory nfs4 Feb 12 19:39:52.348000 audit[3499]: AVC avc: denied { confidentiality } for pid=3499 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.548356 kernel: NFS: Registering the id_resolver key type Feb 12 19:39:52.548448 kernel: Key type id_resolver registered Feb 12 19:39:52.548467 kernel: Key type id_legacy registered Feb 12 19:39:52.348000 audit[3499]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fdc536ae010 a1=1d3cc4 a2=557365c992b0 a3=5 items=0 ppid=68 pid=3499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.348000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.557541 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.557566 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.557580 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.557592 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { 
confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.558522 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.558540 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.559509 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.559532 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.560488 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.560506 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.561466 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.561483 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.562447 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.562464 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown 
permissive=0 Feb 12 19:39:52.563430 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.563447 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.564430 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.564447 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.565448 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.565479 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.566466 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.566501 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.567481 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.567498 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.568497 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.568514 kernel: Failed to create system directory rpcgss Feb 12 19:39:52.552000 audit[3500]: AVC avc: denied { confidentiality } for pid=3500 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:39:52.552000 
audit[3500]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f5fab5d8010 a1=4f524 a2=5639c7b1e2b0 a3=5 items=0 ppid=68 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.552000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 12 19:39:52.598362 nfsidmap[3508]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:39:52.600607 nfsidmap[3511]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:39:52.609000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1272]: AVC avc: denied { watch_reads } for pid=1272 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1272]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e012f497f0 a2=10 a3=b25b7622ac486043 items=0 ppid=1 pid=1272 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.609000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:39:52.609000 audit[1272]: AVC avc: denied { watch_reads } for pid=1272 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1272]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e012f497f0 a2=10 a3=b25b7622ac486043 items=0 ppid=1 pid=1272 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.609000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:39:52.609000 audit[1272]: AVC avc: denied { watch_reads } for pid=1272 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2485 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:39:52.609000 audit[1272]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=d a1=55e012f497f0 a2=10 a3=b25b7622ac486043 items=0 ppid=1 pid=1272 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:39:52.609000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:39:52.754143 kubelet[1544]: E0212 19:39:52.754103 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:39:52.804823 env[1188]: time="2024-02-12T19:39:52.804780274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b47601d-4fc2-4a26-b05a-504369c30fa6,Namespace:default,Attempt:0,}" Feb 12 19:39:52.891037 systemd-networkd[1069]: cali5ec59c6bf6e: Link UP Feb 12 19:39:52.892465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:39:52.892565 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 12 19:39:52.892676 systemd-networkd[1069]: cali5ec59c6bf6e: Gained carrier Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.842 [INFO][3514] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-test--pod--1-eth0 default 2b47601d-4fc2-4a26-b05a-504369c30fa6 1175 0 2024-02-12 19:39:31 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.842 [INFO][3514] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.862 [INFO][3528] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" HandleID="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Workload="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.870 [INFO][3528] ipam_plugin.go 268: Auto assigning IP ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" HandleID="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Workload="10.0.0.89-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024bb20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"test-pod-1", "timestamp":"2024-02-12 19:39:52.862136758 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.870 [INFO][3528] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.870 [INFO][3528] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
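The PROCTITLE fields in the audit records above are hex-encoded because the recorded process title contains NUL bytes separating the argv entries. A minimal Python sketch for turning those fields back into readable command lines; the sample strings are copied verbatim from the records above, while the helper name and the loop are purely illustrative:

def decode_proctitle(hex_value: str) -> str:
    # auditd hex-encodes the title; argv entries are NUL-separated
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

# proctitle values taken from the audit records above
for value in (
    "2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673",
    "2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634",
    "2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36",
    "2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572",
):
    print(decode_proctitle(value))
# expected output:
# /sbin/modprobe -q -- fs-nfs
# /sbin/modprobe -q -- nfsv4
# /sbin/modprobe -q -- rpc-auth-6
# /usr/lib/systemd/systemd --user

The decoded titles match the comm="modprobe" and comm="systemd" values already present in the records: the lockdown denials come from kmod loading the NFS client modules, and the watch_reads denials from the per-user systemd instance.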
Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.870 [INFO][3528] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.871 [INFO][3528] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.875 [INFO][3528] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.878 [INFO][3528] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.879 [INFO][3528] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.880 [INFO][3528] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.880 [INFO][3528] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.882 [INFO][3528] ipam.go 1682: Creating new handle: k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07 Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.884 [INFO][3528] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.888 [INFO][3528] ipam.go 1216: Successfully claimed IPs: [192.168.98.5/26] block=192.168.98.0/26 handle="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.888 [INFO][3528] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.5/26] handle="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" host="10.0.0.89" Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.888 [INFO][3528] ipam_plugin.go 377: Released host-wide IPAM lock. 
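The IPAM exchange above ends with test-pod-1 being handed 192.168.98.5 out of the node's affine block 192.168.98.0/26. As a quick sanity check of that arithmetic (a /26 block holds 64 addresses and the claimed address falls inside it), a short sketch with the standard ipaddress module; the variable names are illustrative and this is not Calico's actual allocator:

import ipaddress

block = ipaddress.ip_network("192.168.98.0/26")   # affine block from the log: .0 through .63
claimed = ipaddress.ip_address("192.168.98.5")    # address assigned to test-pod-1

print(block.num_addresses)   # 64
print(claimed in block)      # True: the claim stays inside the node's affine block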
Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.888 [INFO][3528] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.5/26] IPv6=[] ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" HandleID="k8s-pod-network.8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Workload="10.0.0.89-k8s-test--pod--1-eth0"
Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.889 [INFO][3514] k8s.go 385: Populated endpoint ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2b47601d-4fc2-4a26-b05a-504369c30fa6", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:39:52.900142 env[1188]: 2024-02-12 19:39:52.889 [INFO][3514] k8s.go 386: Calico CNI using IPs: [192.168.98.5/32] ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0"
Feb 12 19:39:52.900754 env[1188]: 2024-02-12 19:39:52.889 [INFO][3514] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0"
Feb 12 19:39:52.900754 env[1188]: 2024-02-12 19:39:52.892 [INFO][3514] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0"
Feb 12 19:39:52.900754 env[1188]: 2024-02-12 19:39:52.892 [INFO][3514] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"2b47601d-4fc2-4a26-b05a-504369c30fa6", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5e:b9:17:52:86:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:39:52.900754 env[1188]: 2024-02-12 19:39:52.898 [INFO][3514] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0"
Feb 12 19:39:52.908000 audit[3554]: NETFILTER_CFG table=filter:101 family=2 entries=38 op=nft_register_chain pid=3554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re"
Feb 12 19:39:52.908000 audit[3554]: SYSCALL arch=c000003e syscall=46 success=yes exit=19064 a0=3 a1=7ffdb3a84dc0 a2=0 a3=7ffdb3a84dac items=0 ppid=2331 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:39:52.908000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030
Feb 12 19:39:52.912483 env[1188]: time="2024-02-12T19:39:52.912431841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:39:52.912564 env[1188]: time="2024-02-12T19:39:52.912472599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:39:52.912564 env[1188]: time="2024-02-12T19:39:52.912487086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:39:52.912657 env[1188]: time="2024-02-12T19:39:52.912614058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07 pid=3558 runtime=io.containerd.runc.v2
Feb 12 19:39:52.933010 systemd-resolved[1128]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 12 19:39:52.956799 env[1188]: time="2024-02-12T19:39:52.956767010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b47601d-4fc2-4a26-b05a-504369c30fa6,Namespace:default,Attempt:0,} returns sandbox id \"8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07\""
Feb 12 19:39:52.958343 env[1188]: time="2024-02-12T19:39:52.958302061Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 19:39:53.703923 kubelet[1544]: E0212 19:39:53.703867 1544 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:53.754237 kubelet[1544]: E0212 19:39:53.754203 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:53.784303 env[1188]: time="2024-02-12T19:39:53.784258880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:39:53.786191 env[1188]: time="2024-02-12T19:39:53.786144678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:39:53.787945 env[1188]: time="2024-02-12T19:39:53.787908495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:39:53.789494 env[1188]: time="2024-02-12T19:39:53.789471440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:39:53.790058 env[1188]: time="2024-02-12T19:39:53.790026607Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 19:39:53.791734 env[1188]: time="2024-02-12T19:39:53.791704189Z" level=info msg="CreateContainer within sandbox \"8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 19:39:53.804765 env[1188]: time="2024-02-12T19:39:53.804321642Z" level=info msg="CreateContainer within sandbox \"8f82f1e80aedca5ebd7f79fc5b61a79573bd706fcb297d1dc899fa637c9d9d07\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"11ec6a2aae02f5a34681a8aebf1e49a1ed095f5bca70f0a4ca7d09c884a6fea0\""
Feb 12 19:39:53.805152 env[1188]: time="2024-02-12T19:39:53.805112878Z" level=info msg="StartContainer for \"11ec6a2aae02f5a34681a8aebf1e49a1ed095f5bca70f0a4ca7d09c884a6fea0\""
Feb 12 19:39:53.841914 env[1188]: time="2024-02-12T19:39:53.841858770Z" level=info msg="StartContainer for \"11ec6a2aae02f5a34681a8aebf1e49a1ed095f5bca70f0a4ca7d09c884a6fea0\" returns successfully"
Feb 12 19:39:54.169112 systemd-networkd[1069]: cali5ec59c6bf6e: Gained IPv6LL
Feb 12 19:39:54.754536 kubelet[1544]: E0212 19:39:54.754496 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:55.755113 kubelet[1544]: E0212 19:39:55.755064 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:56.756164 kubelet[1544]: E0212 19:39:56.756106 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:57.756380 kubelet[1544]: E0212 19:39:57.756322 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:39:58.757280 kubelet[1544]: E0212 19:39:58.757233 1544 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"